BrowserStack & Selenium - proxy configuration

I am trying to set up BrowserStack and local testing.
I opened a tunnel using the BrowserStack Local client like below:
./BrowserStackLocal.exe myCodeToken -proxyHost MY_PROXY_IP -proxyPort MY_PROXY_PORT -v -force -forcelocal
So it is forwarding all the traffic through my local network.
In my local /etc/hosts file I have an entry like below:
127.0.0.1 dev.mysite.com
Then when I'm executing my simple Selenium test, the BrowserStack virtual machines are able to access my domain name and everything works fine. But in some sections where I have external scripts/CSS/images etc., for example Google Analytics, BrowserStack doesn't have access to them (or to the internet in general).
The thing is that my local machine uses MY_PROXY_IP:MY_PROXY_PORT to access the internet, and it looks like BrowserStack tries to forward all the traffic through my machine, which doesn't work.
Do you have any ideas how to resolve this?

You can try removing the -forcelocal parameter, as that is responsible for routing all traffic via your machine. On doing this, any public (external) CSS/images would be resolved directly and not via your machine.
Alternatively, if the -forcelocal parameter is necessary, you can try using it with the -only parameter. In this case, requests for the domain(s) mentioned under -only would be routed via your machine, whereas the rest of the requests would be resolved publicly.
You can execute the Local Testing binaries as follows:
BrowserStackLocal.exe <automate-key> <the-proxy-parameters> -forcelocal -only host_name,port_no,ssl_flag
OR
BrowserStackLocal.exe <automate-key> <the-proxy-parameters> -forcelocal -only host1,port1,ssl_flag,host2,port2,ssl_flag (For multiple hosts)
More details on the different parameters that can be used while setting up the Local Testing connection are available here: https://www.browserstack.com/local-testing#modifiers
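For example, applied to the setup described in the question, the invocation might look like the following (this assumes dev.mysite.com is served locally over plain HTTP on port 80, so the ssl_flag is 0 - adjust those values if your local site differs):
./BrowserStackLocal.exe myCodeToken -proxyHost MY_PROXY_IP -proxyPort MY_PROXY_PORT -forcelocal -only dev.mysite.com,80,0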

Related

Is there a way when doing remote testing (qrcode etc) to control the exposed port?

I'm trying something akin to:
testcafe remote tests/dashboards/CustomerTests/myorg.js --qr-code -ports 9999,9998
But that still exposes the following URL:
http://192.168.134.205:58732/browser/connect
Is there any way to control the "58732" part of this? My internal network is pretty well locked down, and that port # changes every time.
Thanks!
It works correctly with the proper --ports argument (notice the double --):
 
testcafe remote test.js --qr-code --ports 9999,9998
Using locally installed version of TestCafe.
Connecting 1 remote browser(s)...
Navigate to the following URL from each remote browser.
You can either enter the URL or scan the QR-code.
Connect URL: http://192.168.67.55:9999/browser/connect

Testing a site with an IP address whitelist using BrowserStack Automate + cloud-hosted CI

I have a test system (various web pages / web applications) that is hosted in an environment accessible only to machines with IP addresses that are whitelisted. I control the whitelist.
Our CI system is cloud hosted (GitLab), so VMs are spun up dynamically as needed to run automated integration tests as part of the build pipeline.
The tests in question use BrowserStack Automate to run Selenium-based tests, which means the source IP addresses of the BrowserStack-driven requests that hit the test environment are dynamic, as BrowserStack is cloud hosted. The IP addresses of our test runner machines that invoke the BrowserStack automation are dynamic as well.
The whole system worked fine before the introduction of IP whitelisting on the test environment. Since whitelisting was enabled, the BrowserStack tests can no longer access the environment URLs (because the dynamic IPs cannot be whitelisted).
I have been trying to get the CI-driven tests working again using the BrowserStack "Local Testing" feature, outlined here: https://www.browserstack.com/local-testing.
I have set up a dedicated Linux VM with a static IP address (cloud hosted). I have installed and am running the BrowserStackLocal.exe binary, using our BrowserStack key. It starts up fine and says it has connected to BrowserStack via a web socket. My understanding is this should cause all HTTP(S) requests that come from my CI / BrowserStack-automation-driven tests to be routed through that stand-alone machine (via the BrowserStack cloud), resulting in its static IP address being the source of the requests seen at the test environment. This IP address is whitelisted.
This is the command that is running on the dedicated / static IP machine:
BrowserStackLocal.exe --{access key} --verbose 3
I have also tried the below, but it made no apparent difference:
BrowserStackLocal.exe --{access key} --force-local --verbose 3
However, this does not seem to work, either through "Live" testing if I try to access the test environment directly through BrowserStack, or through BrowserStack Automate. In both cases the HTTP(S) requests all time out and cannot reach our test environment URLs. Also, even with the --verbose 3 logging level enabled on the BrowserStackLocal.exe process, I never see any request being logged on the stand-alone / static IP machine when I try to run the tests in various ways.
So I am wondering: is this the correct way to solve this problem? Am I misunderstanding how to do this? Do I perhaps need to run BrowserStackLocal.exe on the same CI runner machine that is invoking the BrowserStack automation? That would be problematic, as these machines currently have dynamic IPs as well.
Thanks in advance for any help!
EDIT/UPDATE: I managed to get this to work!! (Sort of) - it's just a bit slow. If I run the following command on my existing dedicated / static IP server:
BrowserStackLocal.exe --key {mykey} --force-local --verbose 3
Then from another machine (like my dev laptop), if I hit the BrowserStack WebDriver server at http://hub-cloud.browserstack.com/wd/hub and access the site http://www.whatsmyip.org/ to see what IP address comes back, it did (eventually) come back with my static IP machine's address! The problem though is that it was quite slow - 20-30 seconds for that one site hit - so I am still looking at alternative solutions. Note that for this to work your test code must set the "local" BrowserStack capability flag to 'true' - e.g. for Node.js:
// Input capabilities
var capabilities = {
  'browserstack.local': 'true'
};
UPDATE 2: Turning down the --verbose logging level on the local binary (or leaving that flag off completely) seemed to improve things - I am getting 5-10 sec response times now for each request. That might have to do. But this does work as described.
SOLUTION: I managed to get this to work - it's just a bit slow. If I run the following command on my existing dedicated / static IP server (note that adding verbose logging seems to slow things down further, so no --verbose flag is used now):
BrowserStackLocal.exe --key {mykey} --force-local
Then from another machine (like my dev laptop), if I hit the BrowserStack WebDriver server at http://hub-cloud.browserstack.com/wd/hub and access the site http://www.whatsmyip.org/ to see what IP address comes back, it did come back with my static IP machine's address. Note that for this to work your test code must set the "local" BrowserStack capability flag to 'true' - e.g. for Node.js:
// Input capabilities
var capabilities = {
  'browserstack.local': 'true'
};
So while a little slow, that might have to do. But this does work as described.
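For reference, a rough Java equivalent of the Node.js capability snippet above (a sketch using the selenium-java RemoteWebDriver; USERNAME and ACCESS_KEY are placeholders for your BrowserStack credentials, not values from this post):
import java.net.URL;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

DesiredCapabilities capabilities = new DesiredCapabilities();
// Route this session's traffic through the machine running the BrowserStackLocal binary
capabilities.setCapability("browserstack.local", "true");

RemoteWebDriver driver = new RemoteWebDriver(
        new URL("https://USERNAME:ACCESS_KEY@hub-cloud.browserstack.com/wd/hub"),
        capabilities);
driver.get("http://www.whatsmyip.org/"); // should report the static IP of the machine running the Local binary
driver.quit();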

Unable to access GlassFish-served content when using localhost

I created this simple dynamic web project (GlassFish 4.1.1, the latest at the moment) using Eclipse Java EE Mars.2, which I installed 2 days ago.
Checking in the admin console, the app is deployed and running fine. I could not access the web app using the localhost:8080 URL, but it works when I use <computername>:8080.
I could access the admin using localhost:4848.
I tried disabling the firewall but the problem persists. What could be the problem?
The error is:
404 Not Found
No context found for request
In Eclipse I see a log entry in the console that says: Automatic timeout occurred
As I pointed out in the comments, you can configure listeners under Configurations -> <the configuration you need> -> Network Config -> Network Listeners. However, it is still rather strange that localhost doesn't work with the 0.0.0.0 IP address, since that is a special address which means "listen on all available IPs on the given port". Perhaps your network is somehow misconfigured.
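If you prefer the command line, the listener address can also be inspected and changed with asadmin - a sketch, assuming the default server-config and that the listener in question is the default http-listener-1:
asadmin get configs.config.server-config.network-config.network-listeners.network-listener.http-listener-1.address
asadmin set configs.config.server-config.network-config.network-listeners.network-listener.http-listener-1.address=0.0.0.0
asadmin restart-domain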

How to restrict access to Selenium Standalone Server instance?

I have an instance of Selenium Standalone Server in a virtual Windows box to run my tests on. It's started in the following way:
java -jar selenium-server-standalone-2.46.0.jar -D"webdriver.chrome.driver"=chromedriver_2.13.exe -D"webdriver.ie.driver"=IEDriverServer_2.44.exe
Today I've noticed some unexpected lines in the output. The tests run during the night, and after that I see the following:
08:46:20.197 INFO - Couldn't proxy to http://www.cv7.waw.pl/108258/Dachy/artykul.html because host not found
11:12:07.873 INFO - Couldn't proxy to http://g1nkaku.bieszczady.pl/damy-rade-zespol-na-wesele-bydgoszcz because host not found
11:49:49.204 INFO - Couldn't proxy to http://www.swiat.opt.waw.pl/Kryszyn/planeta-102-7/ because host not found
None of my tests access any such links, especially not at those timestamps, and I have no idea where they could have come from. My assumption is that someone found the address of this Selenium server instance and was sending requests through it.
What are my options for restricting access to the Selenium server? Is there any way to require a custom login/password for all clients of this server? It's being used by several people in our team from multiple locations, so IP-based checks are not an option.

How to run Selenium Chrome nodes using a proxy?

I'm using the Docker Selenium images to run browser nodes; the repo is available here: https://github.com/SeleniumHQ/docker-selenium. There is no documentation on how config.json can be used to provide proxy values.
I'm using Selenium version 2.44.0.
In my infrastructure, certain assets are sourced from a location which requires a proxy configuration on the browser to access them. I'm trying to set up a proxy on a Chrome node. According to the documentation here, the proxy can be set like the following:
java -jar selenium-2.44.0.jar -Dhttp.proxyHost=192.168.2.10 -Dhttp.proxyPort=80
My proxy does not require a username and password, hence I have omitted those values.
What is not clearly mentioned in the SeleniumHQ documentation is whether the proxy configuration is needed on both the hub and the nodes, or just on the nodes. I've tried different combinations, but none have worked for me.
The actual commands I'm running are:
For Hub:
java -jar /opt/selenium/selenium-server-standalone.jar -role hub -Dhttp.proxyHost=192.168.2.10 -Dhttp.proxyPort=80 -hubConfig /opt/selenium/hubconfig.json
When I run the command above, I can see the -D* values being displayed in the console config.
For node:
xvfb-run --server-args=":99.0 -screen 0 1360x1020x24 -ac +extension RANDR" java -jar /opt/selenium/selenium-server-standalone.jar -Dhttp.proxyHost=192.168.2.10 -Dhttp.proxyPort=80 -role node -hub http://$HUB_PORT_4444_TCP_ADDR:$HUB_PORT_4444_TCP_PORT/grid/register -nodeConfig /opt/selenium/config.json
When I run this command I can see the proxy values in the console again, but the assets are not loaded by the browser.
Also, on a side note, it seems like this can be done on the developer's side (in Java code), but I'm keen to solve it on my (operations) side.
Thanks - here is what we got:
First you need a way to verify your settings made it into the browser.
chrome://net-internals/proxyservice.config#proxy
The actual command line instruction is:
/chromeexec --proxy-server="http=http://proxyserver:port/;https=http://proxyserver:port/"
Note that the semicolons will blow up on the bash command line if you don't use double quotes.
Now if you're sending this from WebDriver Java code programmatically, you'll need to escape the double quotes, so the proxy server setting in Java may look like:
org.openqa.selenium.Proxy proxy = new org.openqa.selenium.Proxy();
proxy.setHttpProxy("\"http://proxyserver:port/\"");
Alternatively you can pass this in as an execution parameter.
DesiredCapabilities capabilities = DesiredCapabilities.chrome();
capabilities.setCapability("chrome.switches", Arrays.asList("--proxy-server=\"http=http://proxyserver:port/;https=http://proxyserver:port/\""));
WebDriver driver = new ChromeDriver(capabilities);
Now, your original question was about accessing external resources via the proxy. What we did (similar to your case) was to add a proxy exception for the site we were hitting, so that the external resources would go via the proxy.
So you add an exception for your primary website - assuming it is at 10.1.10.5, it looks like:
--proxy-bypass-list=10.1.10.5
Which we then do in code as:
capabilities.setCapability("chrome.switches", Arrays.asList("--proxy-server=\"http=http://proxyserver:port/;https=http://proxyserver:port/\"", "--proxy-bypass-list=10.1.10.5"));
Note that setting a username and password this way is affected by a bug in Chrome. (Please star it if this holds you up.)
If you need a username and password, then the solution is a PAC file.
The syntax is:
--proxy-pac-url=file:///proxy.pac
The file format looks like:
function FindProxyForURL(url, host) {
    if (host == "mylocalserver.com") {
        return "DIRECT";
    } else {
        return "PROXY wcg2.example.com:8080";
    }
}
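If you then want to hand that PAC file to Chrome from the same Java test code, a minimal sketch along the lines of the chrome.switches example above might look like this (file:///proxy.pac is just the example path from above - adjust it to wherever the PAC file actually lives on the node):
import java.util.Arrays;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.remote.DesiredCapabilities;

DesiredCapabilities capabilities = DesiredCapabilities.chrome();
// Tell Chrome to resolve proxies through the PAC file instead of a fixed --proxy-server value
capabilities.setCapability("chrome.switches", Arrays.asList("--proxy-pac-url=file:///proxy.pac"));
WebDriver driver = new ChromeDriver(capabilities);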
For the case of usernames and passwords in proxy settings, note the following:
Proxy auto-configuration files do not support hard-coded usernames and passwords. There's good reasoning behind this too, since providing support for hard-coded credentials would open up significant security holes, as anybody would be able to easily view the required credentials to access the proxy.
Rather, configure the proxy as a transparent proxy; that way you won't need a username and password. You mention in one of your comments that the proxy server is located outside your LAN, which is why you require authentication. However, most proxies support rules based on the source IP, in which case it's a simple matter of only allowing requests originating from your corporate network.
The proxy auto-config specification was originally drafted by Netscape in 1996. It is no longer available directly, but you can still access an archived copy via the Wayback Machine. The specification hasn't changed much and is still largely the same as it was originally. You'll see that it is quite simple, and that there is no provision for hard-coded credentials.
To solve this problem, you can use this tool:
https://github.com/sjitech/proxy-login-automator
This tool creates a local proxy and automatically injects the username/password when forwarding to the real proxy server. It also supports PAC scripts.