In the official documentation of the Selenium Docker setup, I see a config.toml file which contains the info below:
[docker]
# Configs have a mapping between the Docker image to use and the capabilities that need to be matched to
# start a container with the given image.
configs = [
"selenium/standalone-firefox:4.3.0-20220706", "{\"browserName\": \"firefox\"}",
"selenium/standalone-chrome:4.3.0-20220706", "{\"browserName\": \"chrome\"}",
"selenium/standalone-edge:4.3.0-20220706", "{\"browserName\": \"MicrosoftEdge\"}"
]
# URL for connecting to the docker daemon
# Most simple approach, leave it as http://127.0.0.1:2375, and mount /var/run/docker.sock.
# 127.0.0.1 is used because internally the container uses socat when /var/run/docker.sock is mounted
# If /var/run/docker.sock is not mounted:
# Windows: make sure Docker Desktop exposes the daemon via tcp, and use http://host.docker.internal:2375.
# macOS: install socat and run the following command, socat -4 TCP-LISTEN:2375,fork UNIX-CONNECT:/var/run/docker.sock,
# then use http://host.docker.internal:2375.
# Linux: varies from machine to machine, please mount /var/run/docker.sock. If this does not work, please create an issue.
url = "http://127.0.0.1:2375"
# Docker image used for video recording
video-image = "selenium/video:ffmpeg-4.3.1-20220706"
# Uncomment the following section if you are running the node on a separate VM
# Fill out the placeholders with appropriate values
[server]
host = <ip-from-node-machine>
port = <port-from-node-machine>
What do the bottom two parameters, host and port, represent?
FYI: I am planning to run the Hub container in one VM and the Node containers in other VMs.
Correct me if I am wrong, but I am guessing the config.toml file should be present in the VMs where we would be running the Nodes.
So, for host=, should we give the IP of the machine where the Hub is up and running?
And for port=, where do we get the port number?
Expecting answers ASAP, thanks in advance.
Yes, the host and port values are the details of where your Hub is running. The port number is 4444 if your Hub is running on the default port.
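A minimal illustration of what the filled-in section could look like, assuming (example values only) that machine is reachable at 192.168.1.10 and the default port 4444 is used:
[server]
# Example values only; substitute the real IP and port for your environment.
host = "192.168.1.10"
port = 4444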
I'm running two APIs through docker-compose on Linux. I tried to pass them the IPs that the docker containers have; I checked with docker inspect. They are on the same (docker) network. Should this work and I'm mistaken, or is there a simpler way to set each one's address? I went through the Docker docs, but nothing seems to resolve the problem.
Whenever you start docker containers and expose ports from each container, the default IP would be localhost or 0.0.0.0.
So, containers can communicate via localhost:<port_of_other_container>.
If it doesn't work, try the ifconfig -> en0 -> inet address instead of localhost.
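As a rough sketch of the setup described in the question (service names, images, and ports are hypothetical), each API publishes its container port to a distinct host port, so from the host they are reachable at localhost:<published_port>:
version: "3.8"
services:
  api-one:
    image: example/api-one:latest   # placeholder image
    ports:
      - "8001:80"                   # host port 8001 -> container port 80
  api-two:
    image: example/api-two:latest   # placeholder image
    ports:
      - "8002:80"                   # host port 8002 -> container port 80
With this layout, http://localhost:8001 and http://localhost:8002 reach the two APIs from the host; both services also share the default Compose network.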
I didn't find anything about running many different webapp containers on one host. For example, I have two containers: on the first I run Apache with ownCloud, and on the second I run a WordPress blog. Both of them have to run on port 80. How could I handle this?
Thanks
You can use -p flag to map ports:
docker run -p 8080:80 owncloud
docker run -p 8081:80 wordpress
And then access ownCloud at http://yourdomain.com:8080/ and WordPress at http://yourdomain.com:8081/.
It is common to combine docker with a reverse proxy like HAProxy.
With a reverse proxy you can pass requests for owncloud.yourdomain.com to your ownCloud container and requests for wordpress.yourdomain.com to the WordPress container (or yourdomain.com/owncloud and yourdomain.com/wordpress).
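A minimal HAProxy sketch of that idea, assuming the two containers are published on host ports 8080 and 8081 as in the previous answer (the domain names are placeholders):
# haproxy.cfg fragment - hostnames and backend addresses are examples only
frontend http-in
    bind *:80
    acl host_owncloud  hdr(host) -i owncloud.yourdomain.com
    acl host_wordpress hdr(host) -i wordpress.yourdomain.com
    use_backend be_owncloud  if host_owncloud
    use_backend be_wordpress if host_wordpress
backend be_owncloud
    server owncloud 127.0.0.1:8080
backend be_wordpress
    server wordpress 127.0.0.1:8081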
You will have to use different ports on the host (otherwise you will get an error when starting the second container).
To avoid that, map at least one container's internal port 80 to a different port on the host.
For instance, when running docker run:
docker run -p 8081:80 name_of_your_image
This will expose port 80 of the container on port 8081 of the host.
If you want, you can use docker-gen; it's a simple tool that lets you route and balance traffic to your containers based on environment variables set on each container.
This is the documentation:
https://github.com/jwilder/docker-gen
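A common pattern built on docker-gen is the same author's jwilder/nginx-proxy image, which watches the Docker socket and routes requests by a VIRTUAL_HOST environment variable; a rough sketch (the domain names are placeholders):
# Start the proxy; it reads container metadata from the Docker socket.
docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
# Start the app containers with the hostname the proxy should route to them.
docker run -d -e VIRTUAL_HOST=owncloud.yourdomain.com owncloud
docker run -d -e VIRTUAL_HOST=wordpress.yourdomain.com wordpress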
I have a server with the Debian operating system. I installed Docker on it and it works fine, as you can see here:
root@3053b0461a3c:/# which wget
/usr/bin/wget
root@3053b0461a3c:/#
An ubuntu based container is running.
Then I started a second terminal, connected via ssh to the server, and typed in the console:
docker ps
But as output I got the message:
Cannot connect to the Docker daemon. Is the docker daemon running on this host?
Why is the docker service not running?
Unset the environment variable DOCKER_HOST and it should work.
https://github.com/docker/docker/blob/eff810aed688879f67a3730c41d9adce4637470f/docs/installation/ubuntulinux.md
Try unset DOCKER_HOST
In most Unix-based (or Unix-like) environments that I've seen, there is the concept of environment variables, which can be considered dynamic configuration. The two operations available are:
set, which assigns a value to an environment variable
unset, which removes an environment variable.
In the case of DOCKER_HOST, docker uses this variable to know whether it should connect to a network host, e.g. tcp://192.137.23.11, or to the local Unix socket.
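A quick way to check and clear it in the shell session where docker ps failed:
# See whether DOCKER_HOST points somewhere unreachable.
echo "$DOCKER_HOST"
# Remove it for this session so the client falls back to the local Unix socket
# (/var/run/docker.sock), then retry.
unset DOCKER_HOST
docker ps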
I am trying to connect to the container named redis, which is running right now, but I get the error Could not connect to Redis at redis:6379: Name or service not known. Can anyone please help me figure out the issue and fix it?
This is because the two containers are not on the same network. Add a networks property under each service name and make sure it is the same for both:
redis:
  networks:
    - redis-net
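Put together, a minimal docker-compose.yml sketch could look like this (the app service name and image are placeholders; the shared network is the point):
version: "3.8"
services:
  app:                        # placeholder for the service that connects to Redis
    image: example/app:latest
    networks:
      - redis-net
  redis:
    image: redis:latest
    networks:
      - redis-net
networks:
  redis-net:                  # both services join this network, so the hostname "redis" resolves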
Naming the container doesn't alter your hosts file or DNS, and depending on how you ran the container it may not be accessible via the standard port as Docker does port translation.
Run docker inspect redis and examine the ports output; it will tell you which port it is accessible on as well as the IP. Note, however, that this will only be connectable over that IP from that host. To access it from off of the host, you will need to use the port from the above command and the host's IP address. That assumes your local firewall rules allow it, which is beyond the scope of this site.
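For example (the container name is taken from the question; your output will differ):
# Show the published port mappings for the container.
docker port redis
# Show the container's IP address on its Docker network.
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' redis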
Try the command below:
src/redis-cli -h localhost -p 6379
I'm "dockerizing" an app which does UDP broadcast heartbeating on a known port. This is with docker-engine-1.7.0 on a variety of hosts (Fedora, Centos7, SLES 12).
I notice that the 'docker0' bridge on the docker host and 'eth0' inside the container each have a broadcast address of 0.0.0.0.
Assuming admin privilege on the host I can manually set the broadcast address on docker0. Likewise in the container (if the container is running privileged or with NET_ADMIN, NET_BROADCAST), but I'm curious why the broadcast address isn't set by default. Is there a configuration option I'm missing for Docker to do this automatically?
Host:
# ifconfig docker0 broadcast 172.17.255.255 up
# tcpdump -i docker0 udp port 5000
Container:
# ifconfig eth0 broadcast 172.17.255.255 up
# echo "Hello world" | socat - UDP-DATAGRAM:172.17.255.255:5000,broadcast
Broadcast from the host to the container also works once the broadcast addresses are set.
If you are passing NET_ADMIN to the Docker container, I would not use the docker0 network at all for your application.
If I understood correctly what you are trying to do, the UDP broadcast heartbeating on a known port is used by Docker containers that belong to different hosts to find each other, not by different docker containers on the same host.
I would then recommend using --net=host:
docker run --net=host --cap-add NET_ADMIN ....
This way, if you get a shell inside the docker container, you will see that the network environment is exactly the same as that of the host running the containers. If your application was previously running on that server using UDP broadcast, it will work in exactly the same way inside the docker container.
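As a quick sanity check (the image is just an example; busybox is used only because it ships a small ifconfig), compare the interface list inside such a container with the host's:
# With --net=host the container shares the host's network namespace, so this
# lists the host's interfaces (docker0, eth0, ...) rather than a private container eth0.
docker run --rm --net=host busybox ifconfig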