Bind selenium to a specific IP - Possible? - selenium

Like many, we start selenium server via the following command:
java -jar selenium-server-standalone-2.21.0.jar
What we found is that this opens selenium up on 0.0.0.0:4444
Started SocketListener on 0.0.0.0:4444
[USER # BOX ~]# netstat -na | grep LISTEN | grep 4444
tcp 0 0 :::4444 :::* LISTEN
Is there any way to bind selenium to a specific ip (localhost)?
Thanks.

Use the following command
java -jar selenium-server-standalone-2.21.0.jar -host 192.168.1.100
where 192.168.1.100 is the IP address of the host
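If the goal is to restrict the server to localhost only, as the question asks, the same flag takes the loopback address. A sketch, assuming the same 2.21.0 jar:

```shell
# Bind the standalone server to loopback only.
java -jar selenium-server-standalone-2.21.0.jar -host 127.0.0.1 -port 4444 &

# Verify the binding: the listener should now show 127.0.0.1:4444
# instead of the wildcard :::4444 from the original netstat output.
netstat -na | grep LISTEN | grep 4444
```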

This is not the correct way to handle the problem, but it is a way. What it does is drop any connection to port 4444 coming from an outside source. You can test it yourself.
Start the server like this:
java -jar selenium-server-standalone-2.39.0.jar -host 127.0.0.1 -port 4444
Verify everything is working by loading
http://yourexternalip:4444/wd/hub/
If your server is running properly, the page will load.
Dispatch the commands
sudo iptables -A INPUT -p tcp --dport 4444 -s 127.0.0.1 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 4444 -j DROP
Then reload the page. It will no longer be accessible (because you are accessing it from your external IP).
your new accessible URL is now
http://127.0.0.1:4444/wd/hub/
which should be working
Again, this is more of a band-aid than a fix for the underlying problem, but it does not force you to change any source code, and it keeps the system secure.
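To undo the workaround later, the rules can be deleted by re-specifying them (a sketch; iptables -D accepts the same rule spec as -A):

```shell
# List INPUT rules with their positions, for inspection.
sudo iptables -L INPUT -n --line-numbers

# Delete the two rules added above, by spec.
sudo iptables -D INPUT -p tcp --dport 4444 -s 127.0.0.1 -j ACCEPT
sudo iptables -D INPUT -p tcp --dport 4444 -j DROP
```

Note that rules added this way do not survive a reboot unless you persist them with your distribution's iptables-save mechanism.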

This will be possible by adding the "-host 192.168.1.100" parameter, provided you have this fix in your version:
https://code.google.com/p/selenium/source/detail?r=71c5e231f442
(That fix isn't included in the available binaries at the time of writing so you will have to build your own from source.)

I was facing the same problem with the Hub: when I brought it up, it bound to a different IP address than the one my local system actually had. To overcome the problem I tried the following command and it worked:
java -jar selenium-server-standalone-3.12.0.jar -host 192.XXX.X.XX -role hub
And my hub was registered under my local machine's IP address.
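As a fuller sketch (the address below is made up; in Grid 3.x a node registers against the hub's /grid/register endpoint):

```shell
# Start the hub bound to the machine's LAN IP (example address).
java -jar selenium-server-standalone-3.12.0.jar -role hub -host 192.168.1.25

# On each node machine, register against that hub.
java -jar selenium-server-standalone-3.12.0.jar -role node \
     -hub http://192.168.1.25:4444/grid/register
```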

You could run java -jar selenium-server-standalone-2.21.0.jar on a remote machine
and then, in your Selenium scripts, define your WebDriver to run remotely.
In Ruby you could do it this way:
driver = Selenium::WebDriver.for(:remote, :url => "http://specific_ip_of_remotemachine:4444/wd/hub", :desired_capabilities => :firefox)
is this what you are looking for?

Related

Is there any way to change the default port of selenoid-ui from 8080 to other ports

Is there any way we can change the default port of selenoid-ui from 8080 to some other port? I've tried the following in the yml file, but with no success. With this configuration selenoid-ui works with neither 8080 nor 8081:
selenoid-ui:
  image: "aerokube/selenoid-ui"
  network_mode: bridge
  links:
    - selenoid
  command: ["--selenoid-uri", "http://selenoid:4444"]
  command: ["--listen", ":8081"]
I have read in a few posts about using the cm tool to start selenoid-ui on a different port, but is it possible to do this in the docker-compose yml file?
Thanks in advance.
Selenoid UI is just a regular web service, by default listening on port 8080. Having said that, you have several options:
1) When running as a binary simply use -listen flag as follows:
$ ./selenoid-ui -listen :8081
2) When running as Docker container it is better to use port mapping:
$ docker run -d --name selenoid-ui -p 8081:8080 aerokube/selenoid-ui:latest-release
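For docker-compose specifically, option 2 translates to a ports mapping. Note also that the yml in the question declares command: twice; in YAML the second key silently overrides the first, so all flags for one command must go in a single list. A sketch:

```yaml
selenoid-ui:
  image: "aerokube/selenoid-ui"
  network_mode: bridge
  links:
    - selenoid
  ports:
    - "8081:8080"
  command: ["--selenoid-uri", "http://selenoid:4444"]
```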

Specify Selenium Firefox Node host as a variable inside a Docker container

I'm trying to set up Selenium in Docker Swarm. It's a standard setup, so hub + replicated Firefox nodes. Since I'm using different networks for different components of the Swarm, I've encountered a problem with networking.
Although the Firefox node's IP is, let's say, 10.0.1.19, it gets reported to the Selenium Hub as 172.19.0.4. The hub cannot connect to this IP, since it's outside the network created for Selenium, so the node registration times out.
I found out I can set host, port, and remoteHost arguments of Firefox containers but since everything is dynamic I cannot hardcode those values. Therefore I thought about doing something like this in my docker-compose.yml file inside Firefox Node definition:
environment:
  - SE_OPTS="-host $$HOSTNAME -port 5555 -remoteHost http://$$HOSTNAME:5555"
If $HOSTNAME variable could be used this would solve my problem immediately. Unfortunately while checking Hub logs I see:
java.security.InvalidParameterException: Error: Not a correct url to register a remote : http://$HOSTNAME:5555"
Apparently the argument is not changed to its value before sending it to the hub. I'd like to send the right IP or hostname of the Firefox node. Any ideas?
The solution was to edit the Firefox Docker image's entrypoint file and manually add:
export MYIP="$(cat /etc/hosts | grep $HOSTNAME | sed 's/\s.*$//' | tr -d '\n')"
REMOTE_HOST="http://$MYIP:5555"
REMOTE_HOST_PARAM="-remoteHost http://$MYIP:5555"
This way the node always reports its correct IP, based on the entry found in /etc/hosts.

How do I connect to a localhost site using Selenium Docker image?

I have a node application that I can start with node server.js and access on localhost:9000.
I have a series of e2e tests on selenium that run fine, but I am now looking to use the docker selenium image.
I start the docker image with docker run -d -p 4444:4444 selenium/standalone-chrome
and I changed my e2e test code to look like:
var driver = new webdriver.Builder().
  usingServer('http://127.0.0.1:4444/wd/hub').
  withCapabilities(webdriver.Capabilities.chrome()).
  build();
// driver.manage().window().setSize(1600, 1000);
return driver.get('http://127.0.0.1:9000')
  .then(function() {
    // driver.executeScript('localStorage.clear();')
    return driver;
  });
But selenium fails to connect to the app at all!
(If I uncomment the setSize line, the program fails right there)
I have the server up an running, and it's indeed accessible at localhost:9000. How can I get my test to properly use dockerized selenium, and properly point to a server on localhost?
If you want your container's network behaviour to be like your host machine's, use docker run --network=host
From the host machine, Docker endpoints aren't accessible at localhost. Did you try using 0.0.0.0 instead of 127.0.0.1?
If you are using a Mac, you can try to get the gateway from netstat inside the Docker image:
netstat -nr | grep '^0\.0\.0\.0' | awk '{print $2}'
or run ifconfig in a terminal, get the inet address, and try that instead of 127.0.0.1.
What does the docker ps command return for this container? Does it display something like "0.0.0.0:4444->4444/tcp"?
You can run sudo iptables -L -n and verify that a line like the one below appears under the "Chain DOCKER" section:
ACCEPT tcp -- 0.0.0.0/0 x.x.x.x tcp dpt:4444
Just to make sure I understand: Selenium runs in Docker and tries to access the node app that runs on the host?
In this case, these are two different "servers", so you need to use real IP addresses (or DNS names).
Pass the IP of the server as a parameter to the dockerized Selenium image; the simplest way would probably be an environment variable.
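To make that suggestion concrete (APP_HOST is a made-up variable name, and 192.168.1.50 a placeholder for your machine's real LAN IP): the browser runs inside the container, so 127.0.0.1 there means the container itself, not the machine running node.

```shell
# Start dockerized Selenium as before.
docker run -d -p 4444:4444 selenium/standalone-chrome

# Run the tests with the app's address injected, instead of 127.0.0.1:
APP_HOST=192.168.1.50 node e2e-tests.js
# ...and in the test code:
#   driver.get('http://' + process.env.APP_HOST + ':9000')
```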

How to access my docker container (Notebook) over the Internet. My host is running on Google Cloud

I am not able to access my container which is running a “dockerized” ipython notebook application. The host is a CentOS7 running in Google Cloud.
Here is the details of the environment:
Host: CentOS7/Apache web server running, for example, on IP address 123.4.567.890 (port 80 is listening)
Docker container: a Jupyter Notebook application – the container is called, for example, APP-PN and can be accessed via port 8888 in Docker.
If I run the application on my local server, I can access the notebook application via the browser:
http://localhost:8888/files/dir1/app.html
However, when I run the application on the Google Cloud if I put:
http://123.4.567.890:8888/files/dir1/app.html
I cannot access it.
I tried every combination of opening port 8888 via TCP on the host and exposing the port via the docker run command – none of which worked:
firewall-cmd --zone=public --add-port=8888/tcp --permanent
docker run -it -p 80:8888 APP-PN
docker run --expose 8888 -it -p 80:8888 APP-PN
I also tried to change Apache to listen on ports 80 and 8888, but I got some errors.
However if I STOP the Apache Webserver and then run the command
docker run -it -p 80:8888 APP-PN
I can access the application simply in my browser via:
http://123.4.567.890/files/dir1/app.html
Here is my question: I do not want to stop my Apache web server, and at the same time I want to access my Docker container via the external port 8888.
Thanks in advance for all the help.
I didn't see in your examples a
docker run -it -p 8888:8888 APP-PN
The -p argument describes first the host port to listen on and then the container port to route to. If you want the host to listen on the same port as the container, -p 8888:8888 will get it done.
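Combined with the firewall rule from the question, the whole thing would look roughly like this (the IP is the question's placeholder):

```shell
# Open 8888 on the host firewall and reload it.
firewall-cmd --zone=public --add-port=8888/tcp --permanent
firewall-cmd --reload

# Map host port 8888 to container port 8888; Apache keeps port 80.
docker run -d -p 8888:8888 APP-PN

# The notebook is then reachable at:
#   http://123.4.567.890:8888/files/dir1/app.html
```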

can't access apache on docker from my localhost

I've been following this tutorial for beginners about docker which basically instructs you to create an apache container and map a localhost port to the one on the container.
When I try localhost:80 it doesn't connect, although the container is up and running.
I even made a firewall rule to allow connections to port 80, but still couldn't connect to localhost.
Any ideas ?
On Windows/OS X, Docker is running inside a Linux virtual machine (Docker Toolbox) with a default IP address of 192.168.99.100. Thus, when you use docker run -p 80:80 to bind the container port to host port, it in fact binds to the virtual machine's port 80. Thus the address you need is http://192.168.99.100.
The 172.17.0.3 address is the address of the docker container inside that virtual machine, and is not accessible directly from Windows/OS X.
Add a line to your Dockerfile before restarting Apache:
RUN echo 'ServerName localhost' >> /etc/apache2/apache2.conf
I stumbled upon this question as I was looking for a way to bind my local HTTP port (80) to the HTTP port of my container, an Apache container running on Docker Desktop for Windows, through WSL2 (this is important).
I couldn't find a quick and easy way to do this, so I figured it out myself.
What you must do is bind your local port (on Windows) to the port on WSL.
Here is how I did it :
$wsl_ip = (wsl -d "docker-desktop" -- "ifconfig" "eth0" "|" "grep" "inet addr:").trim("").split(":").split()[2]
netsh interface portproxy add v4tov4 listenport=443 listenaddress=0.0.0.0 connectport=443 connectaddress=$wsl_ip
netsh interface portproxy add v4tov4 listenport=80 listenaddress=0.0.0.0 connectport=80 connectaddress=$wsl_ip
You can either create a Powershell Script (.ps1) and run it with Powershell, or copy/paste each command line into Windows Terminal / Powershell running with Administrator Privileges.
What this does is:
1) attach to the "docker-desktop" distribution running in WSL2
2) run "ifconfig eth0 | grep inet addr:" to get the local IP address of the "virtual machine"
3) parse the result, and use netsh to create a portproxy between port 80 of your Windows machine and port 80 of your Linux machine. The same is done for port 443. You can easily map other ports if you understand what the command is doing.
More explanation :
Since Docker for Windows 10/11 uses WSL2, when you expose a port (through docker-compose or with an EXPOSE command in your Dockerfile), it is exposed to a Linux distribution called "docker-desktop" that is run with WSL2. For some reason, ports 80 and 443 exposed from a container are NOT forwarded to the host.
The official documentation acknowledges some issues, but their solution is just to use another port (for example, 8080 mapped to 80).
Issues with this method :
Each time you reboot your system (or WSL2), the Linux machine is assigned a new IP and you have to do it again. You could set up a command that runs when your container starts, connects to the host through ssh, and runs the script, but I'm too lazy to have done that myself.
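Since the entries have to be recreated with the new WSL IP after each reboot, it may help to delete the stale ones first (same Administrator PowerShell; netsh portproxy also has a show subcommand to inspect the current state):

```powershell
# Remove the stale mappings before re-running the script above.
netsh interface portproxy delete v4tov4 listenport=80 listenaddress=0.0.0.0
netsh interface portproxy delete v4tov4 listenport=443 listenaddress=0.0.0.0

# List whatever mappings are currently active.
netsh interface portproxy show v4tov4
```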