As shown in this diagram:
All the connections from the Selenium Tests (client) should go directly to the Selenium HUB, which then forwards the request to an appropriate Node and returns the response.
But what I am observing is that, after finding an appropriate Node, the client tries to communicate directly with the Node.
But when the Nodes are in a private network and are accessible only by the Selenium HUB and NOT ACCESSIBLE by the Selenium Tests (client), the subsequent calls fail.
Any idea on how to force all the subsequent calls through the Selenium HUB only?
EDIT
The problem might be something different. My hub is running on 192.168.0.100 (with another IP, 10.0.0.2).
So when I connect to 192.168.0.100 from my .NET RemoteWebDriver client, after connecting to the appropriate node it starts using the other IP (10.0.0.2), which is not accessible from my system.
The answer is NO, it doesn't. The Grid remains active throughout the connection.
The IP 10.0.0.2 belonged to the same Selenium HUB machine. The .NET and Java implementations of the Selenium RemoteWebDriver client were switching to the Location header parameter after the initial handshake. This may be due to the .NET and Java HttpClient implementations.
Related
I have seen many questions about using Selenium behind a proxy, where Selenium nodes connect to the internet via the proxy. The solution indicated everywhere is to specify proxy settings in the code when creating the WebDriver instance.
Unfortunately, in my case this is not going to work, as I am using a distributed Selenium grid where different nodes require different proxy settings. When a test is run, the test runner only communicates with the grid hub and has no control over which node it will run on - thus setting the proxy from inside the test is not possible. Each node is a Linux machine with both Firefox and Chrome running in a virtual framebuffer. Presently the grid has about 25 nodes distributed across multiple data centers, but this number may grow to anywhere up to 1000 in the future.
There are business reasons for such a setup - and I am not in a position (both technically and politically) to change them.
Is there any way to set proxy on a node level and have it apply to everything that's happening on that node only?
Apparently, all I need to do is to define the http_proxy and https_proxy environment variables, which Chrome will then honour.
For Firefox, proxy parameters can be added to /etc/firefox-$version/pref/firefox.js, where $version can be determined by running firefox -v | awk '{print substr($3,1,3)}'.
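A sketch of a node launch script tying both pieces together. The proxy host/port, pref values, and the selenium jar name are assumptions for illustration, not from the answer; adjust them to your environment (writing the Firefox prefs file requires root):

```shell
# Assumed proxy endpoint for this data center -- replace with yours.
export http_proxy="http://proxy.dc1.internal:3128"
export https_proxy="http://proxy.dc1.internal:3128"

# Firefox: append proxy prefs to the version-specific prefs file,
# using the version-detection command from the answer above.
FF_VERSION=$(firefox -v | awk '{print substr($3,1,3)}')
cat >> "/etc/firefox-$FF_VERSION/pref/firefox.js" <<'EOF'
pref("network.proxy.type", 1);
pref("network.proxy.http", "proxy.dc1.internal");
pref("network.proxy.http_port", 3128);
pref("network.proxy.ssl", "proxy.dc1.internal");
pref("network.proxy.ssl_port", 3128);
EOF

# Start the node in the same shell so Chrome inherits the variables.
java -jar selenium-server-standalone.jar -role node -hub http://hub:4444/grid/register
```

Because the variables are exported before the node process starts, every browser the node launches inherits them, so the setting stays scoped to that one node.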
If a browser has the node runtime, doesn’t that also contain express and socket.io libraries? And if so, then can’t you instantiate an http server within the browser context itself?
I mean - does a socket in a browser always act as a ‘client’ - and communicate with a backend server?
Thanks
AV
The Chrome browser uses the V8 engine to compile and run JavaScript. Node.js uses the same V8 engine. But that doesn't mean that the browser has node, nor does it include some of the modules like http which are necessary to run an express server.
If a browser has the node runtime
That is not what a browser has. It does NOT have the node runtime.
The Chrome browser uses the V8 engine to run JavaScript. That is not a runtime library. That is just the JavaScript interpreter that makes the raw language run. The browser then adds a library of stuff that is specific to the browser, such as a DOM library and various browser-specific interfaces such as XMLHttpRequest and others.
node.js also uses the same V8 engine for raw language support. But then node.js adds its own runtime library (that's where the http library is), and those libraries are not in the browser in any way.
And if so, then can’t you instantiate an http server within the browser context itself?
No, you cannot.
I mean - does a socket in a browser always act as a ‘client’ - and communicate with a backend server?
Browsers have two main ways of communicating with an outside server. They can make an http request (often called an Ajax request in the context of the browser). Or, they can make a webSocket connection to another server and exchange messages over the webSocket. The browser would always be the client. It would initiate the connection to some server. There is no way for an outside agent to "connect to a user's browser". Instead, the browser has to connect to the outside agent.
I have a squid proxy container on my local Docker for Mac (datadog/squid image). Essentially I use this proxy so that app containers on my local Docker and the browser pod (Selenium) on another host use the same network for testing (so that the remote browser can access the app host). But with my current setup, when I run my tests the browser starts up on the remote host and then, after a bit, fails the test. The message on the browser is ERR_PROXY_CONNECTION_FAILED right before it closes. So I assume that there is an issue with my squid proxy config. I use the default config, and on the Docker Hub page it says
Please note that the stock configuration available with the container is set for local access, you may need to tweak it if your network scenario is different.
I'm not really sure how my network scenario is different. What should I be looking into for more information? Thanks!
I am running a Selenium grid in a secured private network. I want the hub to receive requests over HTTPS instead of HTTP.
Simply,
Grid console: https://X.XX.XX.X:4444/grid/console
Grid : https://X.XX.XX.X:4444/wd/hub
I have read about putting an Nginx server on the receiving side as a reverse proxy, with the hub running behind it to serve the requests, but I have no idea how exactly this is to be done.
If anyone has done the above, or achieved this in another way, please let me know.
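One common pattern for this is TLS termination: Nginx listens on 4444 with HTTPS and reverse-proxies plain HTTP to the hub on another local port. This is only a sketch; the certificate paths, ports, and hostname are assumptions, and the hub would need to be started on the internal port (e.g. `-port 4445`):

```nginx
server {
    listen 4444 ssl;
    server_name hub.example.internal;

    ssl_certificate     /etc/nginx/certs/hub.crt;
    ssl_certificate_key /etc/nginx/certs/hub.key;

    location / {
        # Hub started locally on a different port, plain HTTP.
        proxy_pass http://127.0.0.1:4445;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```

Clients then use https://X.XX.XX.X:4444/wd/hub while the hub itself still speaks plain HTTP behind Nginx.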
How does selenium impose security in remote grid calls?
I have a test service on one machine. The grid is on another, remote machine. This is a simple buy-a-product flow:
1) The test service invokes my website in the remote grid browser.
2) For the credit card field at checkout, the service sends the credit card data that is stored securely in it to the remote browser.
3) The service then clicks the submit button on the remote browser to submit the data to the website.
How does selenium handle the data flow in step 2? Is there any way to encrypt the data in transit?
Selenium is a browser automation library. It does not have any capability of encrypting anything, anywhere, at any time. If you need encryption, there are other libraries in the Java world (or whichever binding you are using) that accomplish that task.
As for in-transit encryption for communication between node and hub, that is entirely up to the communication channel. Again, Selenium does not encrypt anything. There are various networking solutions for securing communication traffic.
Lastly, Selenium is generally used in a testing situation, where you are hopefully talking to test servers and never using live data with real information (such as live user passwords, live credit cards, etc.). In this case, there is no need to encrypt any of this made-up data, because even if it leaks out, it will be meaningless in the real world.
You can set up an SSH tunnel to a local port on your host and encrypt the connection to the nodes with SSH this way.
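A minimal sketch of that idea (hostnames and ports here are assumptions): forward a local port through the hub machine to a node that is only reachable from the hub, then point the client at localhost. Everything on that port travels inside the encrypted SSH channel.

```shell
# Forward local port 5555 through the hub host to the node's private address.
# -N: no remote command; the tunnel only carries the forwarded traffic.
ssh -N -L 5555:node1.private:5555 user@hub.example.com &
TUNNEL_PID=$!

# The client now talks to localhost:5555; SSH delivers it to the node.
curl -s http://localhost:5555/wd/hub/status

kill "$TUNNEL_PID"
```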