Is there any way to change the default port of selenoid-ui from 8080 to another port - selenoid

Is there any way we can change the default port of selenoid-ui from 8080 to some other port? I've tried the following in my yml file, but with no success. With this configuration selenoid-ui works on neither 8080 nor 8081:
selenoid-ui:
  image: "aerokube/selenoid-ui"
  network_mode: bridge
  links:
    - selenoid
  command: ["--selenoid-uri", "http://selenoid:4444"]
  command: ["--listen", ":8081"]
I have read in a few posts about using the cm tool to start selenoid-ui on a different port. But is it possible to do this in a docker-compose yml file?
Thanks in advance.

Selenoid UI is just a regular web service, by default listening on port 8080. Having said that, you have several options:
1) When running as a binary, simply use the -listen flag as follows:
$ ./selenoid-ui -listen :8081
2) When running as a Docker container, it is better to use port mapping:
$ docker run -d --name selenoid-ui -p 8081:8080 aerokube/selenoid-ui:latest-release
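The same port-mapping approach works in a docker-compose file, which answers the original question. A minimal sketch, reusing the configuration from the question (the ports entry is the only addition):

selenoid-ui:
  image: "aerokube/selenoid-ui"
  network_mode: bridge
  links:
    - selenoid
  command: ["--selenoid-uri", "http://selenoid:4444"]
  ports:
    - "8081:8080"  # host port 8081 -> container port 8080

The UI keeps listening on 8080 inside the container while Docker publishes it as 8081 on the host, so no --listen flag is needed. Note also that a YAML mapping may not repeat a key: with two command entries most parsers silently keep only the last one, which is why the configuration in the question lost its --selenoid-uri argument.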

Related

How to access my docker container (Notebook) over the Internet. My host is running on Google Cloud

I am not able to access my container, which is running a “dockerized” IPython notebook application. The host is a CentOS 7 machine running in Google Cloud.
Here is the details of the environment:
Host: CentOS7/Apache Webserver running for example on IP address: 123.4.567.890 (Port 80 is Listening)
Docker container: A Jupyter Notebook application. The container is called, for example, APP-PN and can be accessed via port 8888 in Docker.
If I run the application on my local server, I can access the notebook application via the browser:
http://localhost:8888/files/dir1/app.html
However, when I run the application on the Google Cloud if I put:
http://123.4.567.890:8888/files/dir1/app.html
I cannot access it.
I tried all combinations of opening port 8888 via TCP on the host as well as exposing the port via the docker run command, none of which worked:
firewall-cmd --zone=public --add-port=8888/tcp --permanent
docker run -it -p 80:8888 APP-PN
docker run --expose 8888 -it -p 80:8888 APP-PN
Also I tried to change Apache to listen on ports 80 and 8888, but I got some errors.
However, if I STOP the Apache webserver and then run the command
docker run -it -p 80:8888 APP-PN
I can access the application simply in my browser via:
http://123.4.567.890/files/dir1/app.html
HERE is my question: I do not want to STOP my Apache webserver, and at the same time I want to access my Docker container via the external port 8888.
Thanks in advance for all the help.
I didn't see in your examples a
docker run -it -p 8888:8888 APP-PN
The -p argument describes first the host port to listen on and then the container port to route to. If you want the host to listen on the same port as the container, -p 8888:8888 will get it done.
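Combined with the firewall rule from the question, a sketch of the full sequence might be (APP-PN is the image name used above; the firewall-cmd --reload step is added so the permanent rule takes effect):

# open 8888 on the host firewall and apply the permanent rule
firewall-cmd --zone=public --add-port=8888/tcp --permanent
firewall-cmd --reload
# publish container port 8888 on host port 8888, leaving 80 free for Apache
docker run -d -p 8888:8888 APP-PN

The notebook should then be reachable at http://123.4.567.890:8888/files/dir1/app.html while Apache keeps serving on port 80.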

Many docker containers on one host

I didn't find anything about running many different webapp containers on one host. So, for example, I have two containers: on the first I run Apache with ownCloud, and on the second I run a WordPress blog. Both of them have to run on port 80. How can I handle this?
Thanks
You can use the -p flag to map ports:
docker run -p 8080:80 owncloud
docker run -p 8081:80 wordpress
And then access ownCloud at http://yourdomain.com:8080/ and WordPress at http://yourdomain.com:8081/.
It is common to combine Docker with a reverse proxy like HAProxy.
With a reverse proxy you can route requests for owncloud.yourdomain.com to your ownCloud container and requests for wordpress.yourdomain.com to the WordPress container (or yourdomain.com/owncloud and yourdomain.com/wordpress).
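A minimal HAProxy sketch of that idea, assuming the two containers were started with the port mappings above (8080 for ownCloud, 8081 for WordPress) on the same host:

defaults
    mode http

frontend http-in
    bind *:80
    acl is_owncloud hdr(host) -i owncloud.yourdomain.com
    acl is_wordpress hdr(host) -i wordpress.yourdomain.com
    use_backend owncloud_backend if is_owncloud
    use_backend wordpress_backend if is_wordpress

backend owncloud_backend
    server owncloud1 127.0.0.1:8080

backend wordpress_backend
    server wordpress1 127.0.0.1:8081

Both sites then share port 80 on the host, and HAProxy decides by Host header which container receives each request.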
You will have to use different ports on the host (otherwise you will get an error starting the second container).
To avoid that, map each container's internal port 80 to a different port on the host.
For instance, when running docker run:
docker run -p 8081:80 name_of_your_image
This will expose port 80 of the container on port 8081 of the host.
If you want, you can use docker-gen, a simple tool that regenerates configuration for a proxy or load balancer based on environment variables set on your containers.
This is the documentation:
https://github.com/jwilder/docker-gen
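A sketch of how docker-gen is typically run alongside nginx (the template file nginx.tmpl is something you provide; -watch and -notify regenerate the config and reload nginx whenever containers start or stop):

# rewrite the nginx config from the template on every container event
docker-gen -watch -notify "nginx -s reload" nginx.tmpl /etc/nginx/conf.d/default.conf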

Connect Docker Container port to server's apache port (odoo Container)

I have downloaded the Odoo container and I want to docker run it on my server and gain access from outside. This means I want to run the container on localhost:8069 and access it from port 8000 (8000 is an open port, and apache2 serves from it). Is this possible?
To allow Dockerized services to be accessed from outside, you can use the --publish option of the docker run command.
From the man page:
-p, --publish=[]
Publish a container's port, or range of ports, to the host.
Format: ip:hostPort:containerPort | ip::containerPort | hostPort:containerPort | containerPort
Both hostPort and containerPort can be specified as a range of ports. When specifying ranges for both, the number of container ports in the range must match the number of host ports in the range (e.g., docker run -p 1234-1236:1222-1224 --name thisWorks -t busybox, but not docker run -p 1230-1236:1230-1240 --name RangeContainerPortsBiggerThanRangeHostPorts -t busybox).
With ip: docker run -p 127.0.0.1:$HOSTPORT:$CONTAINERPORT --name CONTAINER -t someimage
Use docker port to see the actual mapping: docker port CONTAINER $CONTAINERPORT
Then running docker run -p 1.2.3.4:8000:80 image-name will bind port 8000 on the server's 1.2.3.4 interface to port 80 of the container.
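Applied to the Odoo case from the question, a sketch might look like this (the image name odoo and its default port 8069 are assumptions based on the official image):

# publish the container's Odoo port (8069) on host port 8000
docker run -d -p 8000:8069 --name odoo odoo

Note that Docker can only bind host port 8000 if nothing else holds it; if apache2 is already listening on 8000 on all interfaces, either free that port or publish the container on a different one.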

Expose and publish a port with specified host port number inside Dockerfile

Suppose for example that I want to make an SSH host in Docker. I understand that I can EXPOSE 22 inside the Dockerfile. I also understand that I can use -p 22222:22 so I can SSH into that Docker container from another physical machine on my LAN on port 22222, as ssh my_username@docker_host_ip -p 22222. But suppose that I'm so lazy that I can't be bothered to docker run the container with the option -p 22222:22 every time. Is there a way that the option -p 22222:22 can be automated in a config file somewhere? In Dockerfile, maybe?
You can use docker compose.
You can define the listening port in the docker-compose.yml file as below:
version: '2'
services:
  web:
    image: ubuntu
  ssh_service:
    build: .
    command: ssh ....
    volumes:
      - .:/code
    ports:
      - "22222:22"
    depends_on:
      - web
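With that file in place, the workflow could look like this sketch (only the up and ssh commands are added; the user and host names are placeholders from the question):

# build and start the services in the background
docker-compose up -d
# ssh in through the published host port
ssh my_username@docker_host_ip -p 22222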

Connect from one Docker container to another

I want to run rabbitmq-server in one docker container and connect to it from another container using celery (http://celeryproject.org/)
I have rabbitmq running using the below command...
sudo docker run -d -p :5672 markellul/rabbitmq /usr/sbin/rabbitmq-server
and running the celery via
sudo docker run -i -t markellul/celery /bin/bash
When I try the very basic tutorial at http://docs.celeryproject.org/en/latest/getting-started/first-steps-with-celery.html to validate the connection,
I get a connection refused error:
consumer: Cannot connect to amqp://guest@127.0.0.1:5672//: [Errno 111]
Connection refused.
When I install rabbitmq in the same container as celery, it works fine.
What do I need to do to have the containers interact with each other?
[edit 2016]
Direct links are deprecated now. The new way to link containers is docker network connect. It works quite similarly to virtual networks and has a wider feature set than the old way of linking.
First you create your named containers:
docker run --name rabbitmq -d -p :5672 markellul/rabbitmq /usr/sbin/rabbitmq-server
docker run --name celery -it markellul/celery /bin/bash
Then you create a network (last parameter is your network name):
docker network create -d bridge --subnet 172.25.0.0/16 mynetwork
Connect the containers to your newly created network:
docker network connect mynetwork rabbitmq
docker network connect mynetwork celery
Now, both containers are in the same network and can communicate with each other.
A very detailed user guide can be found at Work with networks: Connect containers.
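Since both containers now sit on the same user-defined network, they can reach each other by container name through Docker's embedded DNS. A sketch for the celery case (assuming the rabbitmq image ships ping; guest is RabbitMQ's default user):

# the name rabbitmq resolves to the container's address on mynetwork
docker exec celery ping -c 1 rabbitmq
# so inside the celery container the broker URL can simply be:
#   amqp://guest@rabbitmq:5672//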
[old answer]
There is a new feature in Docker 0.6.5 called linking, which is meant to help communication between Docker containers.
First, create your rabbitmq container as usual. Note that I also used the new "name" feature, which makes life a little bit easier:
docker run --name rabbitmq -d -p :5672 markellul/rabbitmq /usr/sbin/rabbitmq-server
You can use the link parameter to map a container (we use the name here; the id would be OK too):
docker run --link rabbitmq:amq -i -t markellul/celery /bin/bash
Now you have access to the IP and port of the rabbitmq container, because Docker automatically added some environment variables:
$AMQ_PORT_5672_TCP_ADDR
$AMQ_PORT_5672_TCP_PORT
In addition Docker adds a host entry for the source container to the /etc/hosts file. In this example amq will be a defined host in the container.
From Docker documentation:
Unlike host entries in the /etc/hosts file, IP addresses stored in the environment variables are not automatically updated if the source container is restarted. We recommend using the host entries in /etc/hosts to resolve the IP address of linked containers.
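So inside the linked celery container the broker can be addressed either through the amq hosts entry or through the environment variables; a sketch:

# the hosts entry is the restart-safe option recommended above:
#   amqp://guest@amq:5672//
# the environment variables carry the same address and port:
echo $AMQ_PORT_5672_TCP_ADDR $AMQ_PORT_5672_TCP_PORT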
Just get your container IP, and connect to it from another container:
CONTAINER_IP=$(sudo docker inspect --format '{{ .NetworkSettings.IPAddress }}' $CONTAINER_ID)
echo $CONTAINER_IP
When you specify -p 5672, what Docker does is open a new port, such as 49xxx, on the host and forward it to port 5672 of the container.
You should be able to see which port is forwarded to the container by running:
sudo docker ps -a
From there, you can connect directly to the host IP address like so:
amqp://guest@HOST_IP:49xxx
You can't use localhost, because each container is basically its own localhost.
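docker port is a more direct way to look up that mapping than scanning docker ps output; a sketch, assuming the container was named rabbitmq as in the other answers:

# print the host address and port that container port 5672 is published on
sudo docker port rabbitmq 5672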
Create Image:
docker build -t "imagename1" .
docker build -t "imagename2" .
Run Docker image:
docker run -it -p 8000:8000 --name=imagename1 imagename1
docker run -it -p 8080:8080 --name=imagename2 imagename2
Create Network:
docker network create -d bridge "networkname"
Connect the containers created from the images above to the network:
docker network connect "networkname" "imagename1"
docker network connect "networkname" "imagename2"
We can add any number of containers to the network. To verify, inspect it:
docker network inspect "networkname"
I think you can't connect to another container directly by design - that would be the responsibility of the host. An example of sharing data between containers using volumes is given at http://docs.docker.io/en/latest/examples/couchdb_data_volumes/, but I don't think that is what you're looking for.
I recently found out about https://github.com/toscanini/maestro - that might suit your needs. Let us know if it does :), I haven't tried it myself yet.
Edit: Note that you can read here that native "Container wiring and service discovery" is on the roadmap. I guess 0.7 or 0.8 at the latest.
You can get the docker instance IP with...
CID=$(sudo docker run -d -p :5672 markellul/rabbitmq /usr/sbin/rabbitmq-server); sudo docker inspect $CID | grep IPAddress
But that's not very useful.
You can use pipework to create a private network between docker containers.
This is currently on the 0.8 roadmap:
https://github.com/dotcloud/docker/issues/1143