How to break Travis CI build if Appium/Mocha tests fail?

I have a Travis CI project which builds an iOS app then starts Appium and runs tests with Appium/Mocha.
The problem is that even though the Mocha tests fail and throw exceptions, the shell script that runs them via Gulp still exits with 0, and the build is deemed passing.
How can I make the build break/fail when the Mocha tests fail?

Here is how I managed to make this work:
1. Instead of running the Mocha tests via Gulp, run them directly from the shell script, saving the output to mocha.log as well as displaying it on stdout:

./node_modules/.bin/mocha --reporter spec "appium/hybrid/*uat.js" 2>&1 | tee mocha.log

2. Check mocha.log for the string " failing" and exit with 1 if it is found:

if grep -q " failing" mocha.log; then
  exit 1
fi
The exit 1 will make the Travis build fail.
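For reference, here are both steps combined into one script (a sketch of the approach above; note that in Bash, set -o pipefail would be an alternative way to propagate Mocha's non-zero exit status through the tee pipe, making the grep unnecessary):

#!/bin/bash
# Run the Mocha tests directly, logging output while still printing to stdout.
./node_modules/.bin/mocha --reporter spec "appium/hybrid/*uat.js" 2>&1 | tee mocha.log

# Mocha's spec reporter prints e.g. "2 failing" when tests fail;
# detect that marker and fail the Travis build explicitly.
if grep -q " failing" mocha.log; then
  exit 1
fi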

Related

How to force exit a vue-cli thread on completion in a deploy script (e.g. Ctrl-C equivalent)

I'm using Laravel Forge to run a simple deploy script. npm run build calls 'vue-cli-service build'.
Script below. The script 'ends' on
DONE Build complete. The dist directory is ready to be deployed.
INFO Check out deployment instructions at https://cli.vuejs.org/guide/deployment.html
but the thread does not quit, which causes issues in Forge (e.g. it thinks the deploy has timed out or failed when it hasn't).
How do I do the equivalent of Ctrl-C in a terminal once this has finished, in the deploy script? I've seen threads on trap SIGINT / trap etc., but I'm still not really sure how to implement it.
It may be that I just include the exit callback fix noted here: Vue-cli-service serve build completion callback?
git pull origin $FORGE_SITE_BRANCH;
npm run build;
( flock -w 10 9 || exit 1
  echo 'Restarting FPM...'; sudo -S service $FORGE_PHP_FPM reload ) 9>/tmp/fpmlock
if [ -f artisan ]; then
    $FORGE_PHP artisan migrate --force
fi
Try adding the daemon termination command to the end of your deployment script:
$FORGE_PHP artisan horizon:terminate
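Applied to the script above, it might be slotted in after the migration (a sketch; horizon:terminate is only relevant if Laravel Horizon queue workers are what keep the process alive):

git pull origin $FORGE_SITE_BRANCH;
npm run build;
( flock -w 10 9 || exit 1
  echo 'Restarting FPM...'; sudo -S service $FORGE_PHP_FPM reload ) 9>/tmp/fpmlock
if [ -f artisan ]; then
    $FORGE_PHP artisan migrate --force
    # terminate any lingering Horizon daemon so the deploy script can exit
    $FORGE_PHP artisan horizon:terminate
fi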

Allow job to run "reboot" command without causing failure

We have a large number of runners running a large number of jobs in one of our Gitlab CI/CD pipelines.
Each of these runners has a concurrency of 1, and they are of executor type shell.
[EDIT] These runners are AWS EC2 instances using Amazon Linux 2.
After certain jobs in the pipeline have completed, I would like them to run a reboot command to restart the runner.
However, some of these jobs will be tests. Currently, when I run the reboot command, the job fails. Obviously I can set allow_failure so that the job passes, but that means we have no way of determining whether or not the actual test has passed.
Originally, my test job looked like this:
after_script:
  - sleep 1 && reboot
I have also tried the following variations:
after_script:
  - sleep 15 && reboot
  - exit 0

after_script:
  - (sleep 15 ; reboot ) &
  - exit 0
I've also tried running a shell script with the same contents.
All of these result in the same problem - ERROR: Job failed (system failure): aborted: terminated.
Can anyone think of a clever way round this?
In the end, I had to run this in a screen:
sudo screen -dm bash -c 'sleep 5; shutdown -r now;'
This allowed me, in a Gitlab CI pipeline, to run this as a script element, and immediately afterwards execute an exit command, like this:
after_script:
  - sudo screen -dm bash -c 'sleep 5; shutdown -r now;'
  - exit 0
This way, if a test fails - the job fails. If a test passes, the job passes. No need for allow_failure.
Unfortunately, I'm unsure how to then contend with artifact collection, which takes place after the after_script commands. If anyone has any ideas about that one, please add a comment here.
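As an alternative to screen, a transient systemd timer can detach the reboot from the job's process tree (an untested sketch, assuming a systemd-based distribution such as Amazon Linux 2):

after_script:
  # schedule the reboot 5 seconds out via a transient systemd timer,
  # so it survives the runner tearing down the job's process group
  - sudo systemd-run --on-active=5 systemctl reboot
  - exit 0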

Gitlab-ci runner hangs after cypress tests

I am using gitlab-ci to test a React application with Cypress.
The tests seem to pass, but the job hangs after executing the cypress run command.
Thus, the job fails because of the timeout.
My service is the following
cypress:
  image: cypress/base:10
  script:
    - serve -s build -l 3000 & yarn wait-on http://localhost:3000
    - yarn cypress:run
And in my package.json
{
  ...
  "scripts": {
    "cypress:run": "cypress run --spec 'cypress/integration/**/*spec.js' --record false --config video=false"
  },
  ...
}
This is the end of gitlab-ci runner's log:
✔ All specs passed! 01:01 11 11 - - -
Done in 73.82s.
ERROR: Job failed: execution took longer than 20m0s seconds
This issue occurs when a background task is still running in the runner.
To fix it, I put an OR condition on the cypress:run step and kill the serve process if the result is not a success.
There is another kill statement added in the step below as well, in case --parallel is used and multiple steps are running.
Something like this:
script:
  # start the server in the background
  - npx serve -s build -p 3001 &
  # run Cypress tests in parallel
  - yarn cypress:run || (ps -ef | grep [s]erve | awk '{print $2}' | xargs kill -9)
  - (ps -ef | grep [s]erve | awk '{print $2}' | xargs kill -9) || exit 0
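Note that the kill subshell above returns success, so a failed run can still end up passing the job. A variant that keeps the cleanup but preserves the Cypress exit code might look like this (a sketch; it assumes all script lines run in the same shell session, as they do with the shell and docker executors):

script:
  - npx serve -s build -p 3001 &
  - SERVE_PID=$!
  # capture the test result without letting a failure abort the line
  - yarn cypress:run; STATUS=$?
  # always kill the background server, then propagate the test result
  - kill $SERVE_PID || true
  - exit $STATUS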
I don't know if this can help someone, but it is worth trying to update Node.
I had the same problem when using Cypress 11.1.0 and the node:16.17.1-slim Docker image. I no longer experience hang-ups with node:16.18.1-slim.
P.S. Along with the Node update I also updated Chrome from 106 to 107, so I can't be sure what actually did the trick; I just wanted to share a possible solution.

mochawesome cypress failure report

I am using cypress and mochawesome to generate reports for testing. I want to be alerted only when there is a failure. Is it possible to get the number of failing tests without parsing the JSON file?
The exit code of the cypress process will give you the number of failed tests:
npm run cypress
# ... cypress runs...
echo $? # print number of failed tests
Or, for the Windows cmd prompt, see: print exit code in cmd in windows os
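So a minimal alert hook can branch on that exit code without touching the mochawesome JSON at all (a sketch; replace the echo with your actual alerting call):

npm run cypress
FAILED=$?
if [ "$FAILED" -gt 0 ]; then
  # send your alert here; $FAILED holds the number of failed tests
  echo "$FAILED Cypress test(s) failed"
fi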

How to fail gitlab CI build?

I am trying to fail a build in GitLab CI and get an email notification about it.
My build script is this:
echo "Listing files!"
ls -la
echo "##########################Preparing build##########################"
mkdir build
cd build
echo "Generating make files"
cmake -G "Unix Makefiles" -D CMAKE_BUILD_TYPE=Release -D CMAKE_VERBOSE_MAKEFILE=on ..
echo "##########################Building##########################"
make
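As an aside, a build script like this fails reliably only if each command's exit status is checked; prefixing it with a strict-mode line is a common hardening (not part of the original script):

set -e  # abort the script on the first command that exits non-zero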
I have committed code that breaks the build. However, instead of finishing, the build seems to be stuck in the "running" state after make exits. The last line is:
make: *** [all] Error 2
I also get no notifications.
How can I diagnose what is happening?
Update: in the runner, the following is repeated in the log:
Submitting build <..> to coordinator...response error: 500
In GitLab CI's production.log and sidekiq.log, the following is written:
ERROR: Error connecting to Redis on localhost:6379 (ECONNREFUSED)
Full message with stacktrace is here: pastebin.
I have the same problem; I can offer a workaround, but I'm still trying to fix it fully.
1. Most of the time it hangs, but the job keeps going and actually finishes; you can see the processes inside the machine. For example, in my case it compiles and at the end uses Docker to publish the build, so the docker process doesn't exist until that phase is reached.
2. To work around this issue, make the data persistent and "retry" the download over and over again until everything needed has been downloaded.
P.S. Stating what kind of OS you are using always helps.
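Separately, given the ECONNREFUSED message above, a first diagnostic step worth trying is confirming that Redis is actually running and reachable, since GitLab CI's Sidekiq queues depend on it (a sketch; service names vary between installs):

# check whether Redis answers on the default port; expect "PONG"
redis-cli -h localhost -p 6379 ping
# if it does not, start/restart the Redis service
sudo service redis-server restart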