My goal is to set up an environment where CircleCI would run my e2e tests on BrowserStack in different browsers.
My tests assume that a mock server is running (e.g. they check whether a certain call to the mock server has been made or not).
I learned that BrowserStack has a Local Testing feature, but whenever I try to start the mock server on port 65432 it says the port is already in use: Error: listen EADDRINUSE :::65432
I have an Express mock server running (on port 65432), and the tests are run by Nightwatch against a Selenium server.
So far I have only seen examples that run tests against public sites on the internet (like google.com), but I would like to run my own mock server locally and run my tests against it.
Is there a way to run a mock server and have Nightwatch and Selenium run my tests against it, all orchestrated by a CI tool that runs the tests on BrowserStack?
If you have an internal website (not accessible to the public) hosted on your machine (using a mock server such as Tomcat, Nginx, an Express mock server, etc.) and wish to run Selenium-based scripts against that application on BrowserStack, then you can use the Local Testing feature.
You simply need to run the binary they provide on your local machine (where the internal website is accessible) and set the capability 'browserstack.local' to 'true'. The tests running on BrowserStack will then be able to access your internal website. I would recommend reviewing the documentation here; you can also check out the documentation on NightwatchJS-BrowserStack here.
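For illustration, a minimal Nightwatch test_settings block along these lines should do it (the host, port and placeholder credentials here are my assumptions, so check BrowserStack's Nightwatch documentation for the exact values):
"test_settings": {
  "default": {
    "selenium_host": "hub-cloud.browserstack.com",
    "selenium_port": 80,
    "desiredCapabilities": {
      "browserName": "chrome",
      "browserstack.user": "YOUR_USERNAME",
      "browserstack.key": "YOUR_ACCESS_KEY",
      "browserstack.local": true
    }
  }
}
With browserstack.local set to true and the Local binary running on the machine that hosts the mock server, the remote browsers can reach localhost:65432 through the tunnel the binary opens.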
If you wish to trigger the tests using CircleCI, they provide a plug-in for CircleCI as well; read more about it here. In that case the plug-in itself will handle Local Testing for you.
For future readers: my problem was parallelism - I set 2 workers (child processes basically) with the following object:
"test_workers": {
'enabled': true,
'workers': 2
}
I found this setup in one of the examples (which I can't find anymore), but if you are running your Nightwatch tests against your own mock server it can break the test suite, since every worker will try to spin up a mock server for its own tests, which will obviously fail because the port is already taken.
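If you hit the same issue, the simplest fix is to turn the workers off (or otherwise make sure the mock server is started only once, outside the per-worker lifecycle). The relevant toggle is just:
"test_workers": {
  "enabled": false
}
With a single process, only one Express instance tries to bind port 65432 and the EADDRINUSE error goes away.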
Related
We have a suite of automated regression tests driven using Selenium for an Angular app with a .NET Core WEB API backend.
The intention is to include some automated security testing as part of our overnight build/test run.
From reading so far it looks like running ZAP as an intercepting proxy between Selenium and our web application is the way to go (see 'Proxy Regression/Unit Tests' in https://www.zaproxy.org/docs/api/#exploring-the-app) but I'm struggling to find clear documentation/examples.
What is the simplest way to achieve this using OWASP ZAP, and are there any definitive articles/examples available?
Start with the packaged full scan: https://www.zaproxy.org/docs/docker/full-scan/
Set the port and then proxy your Selenium tests through ZAP. Use the -D parameter to make ZAP wait until your tests have finished. For more ZAP automation options see https://www.zaproxy.org/docs/automate/
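As a rough sketch (the image name below is the one the linked Docker docs used at the time; the target URL is a placeholder, so verify both against the current page), the packaged full scan boils down to something like:
docker run -t owasp/zap2docker-stable zap-full-scan.py -t https://your-app.example.com
For the proxying workflow, run ZAP in daemon mode on a known port instead (for example zap.sh -daemon -port 8080), configure the browser driven by your Selenium tests to use that host/port as its HTTP proxy, run the regression suite, and then pull the alerts/report out of ZAP afterwards.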
When we run @QuarkusTest-annotated tests, the first of these tests triggers the Quarkus test extension and starts Quarkus in dev mode. Quarkus then remains running for the duration of the test run. This is how fast debugging is achieved.
But my use case is a bit different: I need to verify an application that is running remotely. Is there a way to tell Quarkus not to start the application while I'm using the @QuarkusTest annotation?
One possible way to achieve this is to simply write JUnit tests and boilerplate code that connect to the API and verify it. However, I would like to use the Quarkus framework while preventing Quarkus from starting the application.
I am looking for a way to set up a central 'hub' for Selenium at work, accessible to anyone within the company. (For example, Tester A writes test scripts and Person B can run them without having to manually copy the scripts over to their local workstation.)
So far I've only thought of installing Selenium in a VM, which would then execute as normal. But if I run Selenium Grid there, would that mean running VMs within a VM? My main concern with VMs is that they'd run slowly.
If anyone can think of a better solution or recommendation please do give me some advice. Thank you in advance.
One idea: you can create an infrastructure combining Jenkins, Selenium, and Amazon.
The following is my solution from another post.
You can do it with a grid.
First of all, create a Selenium hub on an EC2 Ubuntu 14.04 AMI without a UI (command line only) and link it to your Jenkins master as a slave, or use it directly as the master, whichever you prefer. Download the Selenium Server standalone JAR (be careful about which version you download: if you take the Selenium 3 beta, things could change). There you can configure the hub; you can also add the Selenium hub as a service so it runs automatically at server start. It's important to open the default Selenium port (or the one you configured) so the nodes can connect to it. You can do that in the Amazon EC2 console once you have created your instance: just add a security group with an inbound TCP rule for the port and the IPs you want.
Then create a Windows Server 2012 instance (for example; that's what I did) and follow the same process. Download the same Selenium version plus chromedriver (there is no need to download a separate Firefox driver for Selenium versions before Selenium 3). Write the Selenium command that registers the machine with the hub as a node into a text file and save it as a *.bat so you can execute it. If you want to run the .bat at startup, you can create a service with Task Scheduler or use NSSM (https://nssm.cc/). Don't forget to add the security-group rules for this machine too!
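As a sketch of the two commands involved (the JAR version, the chromedriver path and HUB_IP are placeholders; this is the pre-Selenium 3 syntax the answer assumes):
On the hub machine:
java -jar selenium-server-standalone-2.53.1.jar -role hub
In the node's .bat file:
java -Dwebdriver.chrome.driver=C:\selenium\chromedriver.exe -jar selenium-server-standalone-2.53.1.jar -role node -hub http://HUB_IP:4444/grid/register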
Next, create the Jenkins server. You can use the Selenium Hub as the Jenkins master or as a slave.
The last step is configuring a job to run on the Jenkins/Selenium machine. This job needs to be linked to your code repository (Git, Mercurial, ...). Using the Parameterized Build plugin for Jenkins, you can tell that job to pull the revision you want (so every developer can pull the revision with their new changes and new tests) and run the Selenium tests of that build, with the current branch/revision, against the one shared Selenium grid. You can use Ant or Maven to run the Selenium tests in Jenkins.
Maybe it's complicated to follow because there are so many concepts here, but it's robust and it works fine!
If you have doubts, tell me!
If Internet Explorer is not one of the browsers on which you must run your automation tests, I would recommend that you consider docker selenium.
Selenium provides pre-configured Docker images for both the Selenium hub and nodes (refer here for more information). To make use of Docker Selenium, all you need to do is find a machine (preferably a Unix machine), install Docker on it by following the instructions detailed here, and then start the hub and nodes by starting those containers. With Docker you can effectively turn a VM or a physical machine into a browser farm without worrying much about slowness, because Docker is optimised for this and runs each browser environment as a lightweight process rather than a full VM.
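For example, with the images from the linked project (a minimal sketch using the legacy --link flag shown in the docker-selenium README of that era; adapt it to your Docker version):
docker run -d -p 4444:4444 --name selenium-hub selenium/hub
docker run -d --link selenium-hub:hub selenium/node-chrome
docker run -d --link selenium-hub:hub selenium/node-firefox
Your tests then point at http://<docker-host>:4444/wd/hub just as they would with a grid made of real VMs.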
Resorting to the Amazon cloud for running your Selenium nodes is all fine, but if you have corporate policies that prevent incoming traffic from the internet into your intranet region, then I am not sure how far the Amazon cloud would be useful.
Also remember that Jenkins is not something that is absolutely required but is more of a good to have part in the setup because it would let anyone run their tests from a web UI. This will however require that all your tests are checked-in and made available in a central version control system in your organization.
PS: The reason I called out Internet Explorer as an exception is that IE runs only on Windows and there are no Docker images (yet) for Windows; all the Docker Selenium images are Unix-based.
I can't find any question/answer about this (probably I just don't know how to search for it...).
Could somebody give me a general idea of how to execute 200+ Selenium WebDriver tests (Python) from cloud servers/tools?
Thanks!!
rgzl
Another way is Sauce Labs: using this service you'll be able to just send your Selenium Java/Python tests to their cloud infrastructure for execution. The benefits of such testing are obvious: no need to waste time and resources setting up and maintaining your own VM farm, and additionally you can run your test suite in various browsers in parallel. There is also no need to share any sensitive data, source code, or databases.
As said in this article:
Of course inserting this roundtrip across the Internet is not without cost. The penalty of running Selenium tests this way is that they run quite slowly, typically about 3 times slower in my experience. This means that this is not something that individual developers are going to do from their workstations.
To ease the integration of this service into your projects, you may have to write some kind of Sauce Labs adapter that does the necessary SSH tunnel setup/teardown and Selenium configuration automatically as part of a test.
Here's a global idea:
Use Amazon Web Services.
Using AWS, you can have a setup like this:
1 Selenium Grid. IP: X.X.X.X
100 Selenium nodes connecting to X.X.X.X:4444/grid/register
Each Selenium node has a node config allowing 2 maxSessions at once (depending on instance size, of course).
Also have a continuous integration server like Jenkins run your Python tests against the X.X.X.X grid.
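On the test side, the only change is where the driver is created; a minimal Python sketch against such a grid (placeholder IP and site URL, and the older desired_capabilities style, so adjust for your Selenium version):
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

# connect to the grid hub instead of starting a local browser
driver = webdriver.Remote(
    command_executor='http://X.X.X.X:4444/wd/hub',
    desired_capabilities=DesiredCapabilities.CHROME)
driver.get('http://the-site-under-test.example.com')
# ... your existing assertions ...
driver.quit()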
I am developing a Rails 3.x app under Windows 7 and am using Cucumber and Capybara for testing.
I have set up an Ubuntu VM and deployed my app to that.
I want to use Cucumber / Capybara to test my app on the VM after each deployment - after all, this is a different OS and I want to leverage the power of my test suite to test different browsers (Firefox, Chrome and IE) against the deployed site.
In theory, it seems as though I have 3 main options:
1) Run Cucumber locally, with a local browser and hit the remote server (VM guest)
2) Run Cucumber locally with a remote browser hitting the remote server
3) Connect to the VM guest and run Cucumber locally under the VM
It seems to me that option 1) best simulates the real world, i.e. NOT running the browser on the remote server.
However, I am not sure if this is possible, or how to configure things to achieve it. In particular I am not clear whether or not I need Selenium Server in this case, and if I do, whether I should be deploying it locally (on the Windows dev machine) or remotely (in the guest VM where the app is deployed).
I have done a fair bit of Google searching about this issue and have looked at such posts as:
Problems with connecting to VM with cucumber remote test
https://github.com/leonid-shevtsov/headless
and while these give some clues, such as the use of the following settings (which are exactly as described in the official Capybara docs at http://rubydoc.info/github/jnicklas/capybara):
Capybara.app_host = "http://hostname:4444"
Capybara.default_driver = :selenium
Capybara.run_server = false
the examples given seem to refer to the browser running remotely (e.g. my options 2 or 3), and I am still not sure which approach is best.