SauceLabs Pass/Fail using Behat

I am trying to add a pass/fail status in Sauce Labs whenever I run an automated test, but I can't figure out how to do it. I use Behat with the Selenium driver. I read the documentation, but it didn't help me.
I tried the Sauce Labs REST API guide and ran the following in my console:
curl -X PUT \
-s -d '{"passed":true}' \
-u https://USERNAME:APIKEY#saucelabs.com/rest/v1/users/USERNAME
But it doesn't work.

I think you need the session ID.
ownCloud uses:
curl -X PUT -s -d "{\"passed\": $PASSED}" -u $SAUCE_USERNAME:$SAUCE_ACCESS_KEY https://saucelabs.com/rest/v1/$SAUCE_USERNAME/jobs/$SAUCELABS_SESSIONID
see: https://github.com/owncloud/core/blob/master/tests/travis/start_ui_tests.sh#L235
and this ID is pulled from the URL: https://github.com/owncloud/core/blob/master/tests/ui/features/bootstrap/FeatureContext.php#L171
but there might be better ways of getting it
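For reference, a minimal sketch of the call the original command seems to be aiming for, with USERNAME, APIKEY, and SESSION_ID as placeholders: the credentials go to -u by themselves, and the job URL (which carries the session ID) is passed separately:
curl -X PUT \
  -s -d '{"passed": true}' \
  -u USERNAME:APIKEY \
  https://saucelabs.com/rest/v1/USERNAME/jobs/SESSION_ID
If you use Mink's Selenium2Driver, the session ID is also exposed via its getWebDriverSessionId() method, so an AfterScenario hook can fetch it without parsing any URL.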

Related

How to change URL from CI/CD in Gitlab on Automation Testing using Robot Framework

I have three environments (dev/test/prod), and I want to change the URL from .gitlab-ci.yml so I can choose from CI/CD which environment to run the tests against.
Currently I run:
- robot -v BROWSER:Chrome -d results/Chrome test/test.robot
I can't find any info on this. Is it possible?
You can pass the URL through the command line; try the solution below:
- robot -v url:http://your_url -v BROWSER:Chrome -d results/Chrome test/test.robot
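For example, three hypothetical job script lines for .gitlab-ci.yml, one per environment (the URLs are placeholders); the suite reads the value back as ${url}:
- robot -v url:https://dev.example.com -v BROWSER:Chrome -d results/Chrome test/test.robot
- robot -v url:https://test.example.com -v BROWSER:Chrome -d results/Chrome test/test.robot
- robot -v url:https://prod.example.com -v BROWSER:Chrome -d results/Chrome test/test.robot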

scrapyd running as daemon cannot find spider or project

The spider's name is quotes14 and it works well from the command line:
if I run scrapy crawl quotes14 from the directory /var/www/html/sprojects/tutorial/, it runs fine.
I have scrapyd running as daemon.
My scrapy spider files are present here: /var/www/html/sprojects/tutorial/tutorial/spiders
I have many spiders and other files under the above directory, and the project is /var/www/html/sprojects/tutorial/tutorial/.
I have tried:
curl http://localhost:6800/schedule.json -d project=tutorial -d spider=spiders/quotes14
curl http://localhost:6800/schedule.json -d project=/var/www/html/sprojects/tutorial/tutorial/tutorial -d spider=quotes14
curl http://localhost:6800/schedule.json -d project=/var/www/html/sprojects/tutorial/tutorial/ -d spider=quotes14
curl http://localhost:6800/schedule.json -d project=/var/www/html/sprojects/tutorial/tutorial/tutorial -d spider=spiders/quotes14
Each attempt says either "project not found" or "spider not found".
Please help.
In order to use the schedule endpoint, you first have to deploy the spider to the daemon. The docs tell you how to do this:
Deploying your project involves eggifying it and uploading the egg to Scrapyd via the addversion.json endpoint. You can do this manually, but the easiest way is to use the scrapyd-deploy tool provided by scrapyd-client which will do it all for you.
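A minimal sketch of that flow, assuming the project is named tutorial in scrapy.cfg and scrapyd listens on localhost:6800 (run from the project root, /var/www/html/sprojects/tutorial/):
pip install scrapyd-client            # provides the scrapyd-deploy tool
scrapyd-deploy default -p tutorial    # eggify the project and upload it via addversion.json
curl http://localhost:6800/schedule.json -d project=tutorial -d spider=quotes14
Note that scheduling then uses the project name and the bare spider name, not filesystem paths; scrapy.cfg also needs a [deploy] section pointing at the daemon, e.g. url = http://localhost:6800/.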

Create docker image to run selenium tests on different browser versions

I am currently learning to use docker to run selenium tests.
However, running tests on different browser versions requires building our own image.
I tried a few ways but failed to get them running.
I used the Dockerfile at the path below:
https://hub.docker.com/r/selenium/node-chrome/~/dockerfile/
and tried to build the image by using the following command:
docker build -t my-chrome-image --build-arg CHROME_DRIVER_VERSION=2.23 --build-arg CHROME_VERSION=google-chrome-beta=53.0.2785.92-1 NodeChrome
Can anyone guide me on how to implement the same?
Use
docker build -t my-chrome-image --build-arg CHROME_DRIVER_VERSION=2.23 --build-arg CHROME_VERSION=google-chrome-beta <path_to_Dockerfile>
I am using elgalu/selenium.
docker run -d --name=grid -p 4444:24444 -p 5900:25900 --shm-size=1g elgalu/selenium
And looking at elgalu/selenium, it looks like you can change the browser version by adding -e FIREFOX_VERSION=38.0.6 to the docker run command.
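Putting those together, a hedged example that pins the Firefox version when starting the grid (the version number is just the one from above; any version shipped with the image should work):
docker run -d --name=grid -p 4444:24444 -p 5900:25900 \
  --shm-size=1g -e FIREFOX_VERSION=38.0.6 elgalu/selenium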

How to upload APK to Saucelabs

I want to upload my APK to Sauce Labs. How can I do that?
Is there any tab to do so?
I am trying with a curl command as well, but it is not working for me.
Yes, you have to use the curl command correctly.
Download curl from:
http://curl.haxx.se/download.html
After that, use the curl command below:
curl -u YOUR_SAUCE_USERNAME:YOUR_SAUCE_ACCESS_KEY -X POST -H "Content-Type: application/octet-stream" https://saucelabs.com/rest/v1/storage/YOUR_SAUCE_USERNAME/YOUR_ANDROID_APP.apk --data-binary @YOUR_ANDROID_APP.apk
(--data-binary attaches the APK file itself; without it, nothing is uploaded.)
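To check that the file arrived, the stored files can be listed (assuming the classic v1 storage listing endpoint):
curl -u YOUR_SAUCE_USERNAME:YOUR_SAUCE_ACCESS_KEY https://saucelabs.com/rest/v1/storage/YOUR_SAUCE_USERNAME
The test's desired capabilities then reference the uploaded app via the sauce-storage scheme, e.g. app=sauce-storage:YOUR_ANDROID_APP.apk, where the filename must match the uploaded name.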

using GNU Parallel for pagination

I like GNU Parallel and have tried to use it for pagination but need help to get it working successfully. Basically, I am following the use cases on the Quickblox API guide to get data:
http://quickblox.com/developers/Custom_Objects#Get_related_records
The maximum number of records one can retrieve is 100 per page, and one can only retrieve one page at a time; both are specified via -d parameters. I want to use GNU Parallel to obtain pages 1..79.
I found a thread that explains how to use GNU Parallel when you have parameters that take on many different values but haven't been able to successfully adapt it to my case.
GNU Parallel - parallelize serial command line programs without changing them
Your help would be greatly appreciated! Here is the command for a single page:
curl -X GET -H "QB-Token: 7de49c25f44e557aeed1b635" -d "page=3" -d "per_page=100" https://api.quickblox.com/users.xml > qblox_users_page3_100perpage
If you want output in different files:
parallel 'curl -X GET -H "QB-Token: 7de49c25f44e557aeed1b635" -d "page={}" -d "per_page=100" https://api.quickblox.com/users.xml > qblox_users_page{}_100perpage' ::: {1..79}
If you want it in a single big file:
parallel -k 'curl -X GET -H "QB-Token: 7de49c25f44e557aeed1b635" -d "page={}" -d "per_page=100" https://api.quickblox.com/users.xml' ::: {1..79} > qblox_users
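The -k flag keeps the outputs in argument order, so the pages land in qblox_users in order 1..79 even though the requests run in parallel. If the API rate-limits, the number of simultaneous requests can be capped with -j, e.g.:
parallel -k -j8 'curl -X GET -H "QB-Token: 7de49c25f44e557aeed1b635" -d "page={}" -d "per_page=100" https://api.quickblox.com/users.xml' ::: {1..79} > qblox_users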