I have some Docker containers running several OSes, and I would like to make these containers reachable via SSH directly from the Internet. I can use only one public IP address. Right now docker0 is in bridge mode with its default IP. How can I configure Docker so that each container is separately accessible from everywhere?
You do this by mapping each of your containers' SSH ports to a different port on the public IP address.
Like:
$ docker run -d -p 22000:22 --name sshcontainer1 some_image
$ docker run -d -p 22001:22 --name sshcontainer2 some_image
$ docker run -d -p 22002:22 --name sshcontainer3 some_image
...
Then you communicate this port [to your customer]. Done.
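A client would then reach a given container by connecting to the host's public IP with the mapped port, e.g. for sshcontainer2 (the user name is a placeholder):
ssh -p 22001 user@your.public.ip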
The Docker documentation has an example of setting up an SSH server:
https://docs.docker.com/examples/running_ssh_service/
I have tried a coturn configuration on my local system using my local IP address, and it worked. But now I'm trying to configure my public IP to avoid the ICE framework. Is it possible? I'm doing this using Cygwin; can I configure this on my system with my public IP address?
If configuring via the command line, use -L private_IP -X public_IP
Or, in the config file, set
external-ip=public_ip/private_ip and use the private IP as the listening IP.
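For example, a minimal turnserver.conf along those lines might look like this (the addresses are placeholders):
listening-ip=10.0.0.5
external-ip=203.0.113.10/10.0.0.5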
For an AWS instance I use this command:
sudo turnserver -v -o -a -u username:key -f -L private_ip -X public_ip -E private_ip --min-port=minport_number --max-port=max_port_number -r public_ip --no-tls --no-dtls
I have a Debian server with apache2 on it. I can access it by an IP address.
What I want is to be able to access the containers in it (which contain an apache2 server) from the outside via a URL like "myIpAddress/container1". What I currently have is access to those containers only from the Debian server itself.
I thought about using a reverse proxy, but I cannot make it work.
Thank you for your help! :-)
Map the docker container's port to a host port and access the docker container from <host-ip>:port.
docker run -p host-port:container-port image
For example, running the command below will make the container available on the host at 127.0.0.1:80
docker run -p 80:5000 training/webapp
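You can then verify the mapping from the host itself; training/webapp (the demo image from the Docker docs) listens on port 5000 inside the container, so it now answers on host port 80:
curl http://127.0.0.1:80/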
Update:
Setting up reverse proxy using NGINX
This example uses a plain NGINX container as site A and a plain Apache container as site B.
Run the reverse proxy.
docker run -d \
--name nginx-proxy \
-p 80:80 \
-v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
Start the container for site A, specifying the domain name in the VIRTUAL_HOST variable.
docker run -d --name site-a -e VIRTUAL_HOST=a.example.com nginx
Check out your website at http://a.example.com.
With site A still running, start the container for site B.
docker run -d --name site-b -e VIRTUAL_HOST=b.example.com httpd
Check out site B at http://b.example.com.
Note: Make sure you have set up DNS to forward the subdomains to the host running nginx-proxy. If you're using AWS, the easiest way is to use Route53.
For testing locally, map the sub-domains to resolve to localhost by adding entries to the /etc/hosts file.
127.0.0.1 a.example.com
127.0.0.1 b.example.com
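Alternatively, you can test without touching /etc/hosts by sending the Host header explicitly, since nginx-proxy routes requests based on it:
curl -H "Host: a.example.com" http://127.0.0.1/
curl -H "Host: b.example.com" http://127.0.0.1/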
References
jwilder NGINX Proxy on GitHub
NGINX reverse proxy using Docker
I am not able to access my container, which is running a "dockerized" IPython Notebook application. The host is CentOS 7 running in Google Cloud.
Here are the details of the environment:
Host: CentOS 7/Apache web server running, for example, on IP address 123.4.567.890 (port 80 is listening)
Docker container: a Jupyter Notebook application – the container is called, for example, APP-PN and can be accessed via port 8888 in Docker.
If I run the application on my local server, I can access the notebook application via the browser:
http://localhost:8888/files/dir1/app.html
However, when I run the application on Google Cloud and enter:
http://123.4.567.890:8888/files/dir1/app.html
I cannot access it.
I tried all combinations of opening port 8888 via TCP on the host as well as exposing the port via the docker run command – none of which worked:
firewall-cmd --zone=public --add-port=8888/tcp --permanent
docker run -it -p 80:8888 APP-PN
docker run --expose 8888 -it -p 80:8888 APP-PN
Also, I tried to change Apache to listen on ports 80 and 8888, but I got some errors.
However, if I STOP the Apache web server and then run the command
docker run -it -p 80:8888 APP-PN
I can access the application simply in my browser via:
http://123.4.567.890/files/dir1/app.html
HERE is my question: I do not want to STOP my Apache web server, and at the same time I want to access my docker container via the external port 8888.
Thanks in advance for all the help.
I didn't see in your examples a
docker run -it -p 8888:8888 APP-PN
The -p argument describes first the host port to listen on and then the container port to route to. If you want the host to listen on the same port as the container, -p 8888:8888 will get it done.
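With that mapping Apache keeps port 80, and the notebook is reachable on port 8888 directly, provided the host firewall rule from your question is applied (on Google Cloud you may also need to open the port in the cloud firewall). A sketch:
docker run -d -p 8888:8888 APP-PN
firewall-cmd --zone=public --add-port=8888/tcp --permanent
firewall-cmd --reload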
I have downloaded the Odoo container and I want to docker run it on my server and access it from outside. This means I want to run the container on localhost:8069 and gain access from outside via port 8000 (8000 is an open port, and apache2 serves from it). Is this possible?
To allow Dockerized services to be accessed from outside, you can use the --publish option of the docker run command.
From the man page:
-p, --publish=[]
Publish a container's port, or range of ports, to the host.
Format: ip:hostPort:containerPort | ip::containerPort | hostPort:containerPort | containerPort
Both hostPort and containerPort can be specified as a range of ports. When specifying ranges for both, the number of container ports in the range must match the number of host ports in the range (e.g., docker run -p 1234-1236:1222-1224 --name thisWorks -t busybox, but not docker run -p 1230-1236:1230-1240 --name RangeContainerPortsBiggerThanRangeHostPorts -t busybox).
With ip: docker run -p 127.0.0.1:$HOSTPORT:$CONTAINERPORT --name CONTAINER -t someimage
Use docker port to see the actual mapping: docker port CONTAINER $CONTAINERPORT
Then running docker run -p 1.2.3.4:8000:80 image-name will bind port 8000 on the server address 1.2.3.4 to port 80 of the container.
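For the Odoo case above, assuming the official odoo image (which listens on 8069, Odoo's default), a sketch would be (note that host port 8000 must not already be taken by apache2):
docker run -d -p 8000:8069 odoo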
I want to run rabbitmq-server in one docker container and connect to it from another container using celery (http://celeryproject.org/)
I have rabbitmq running using the below command...
sudo docker run -d -p :5672 markellul/rabbitmq /usr/sbin/rabbitmq-server
and running the celery via
sudo docker run -i -t markellul/celery /bin/bash
When I try the very basic tutorial to validate the connection (http://docs.celeryproject.org/en/latest/getting-started/first-steps-with-celery.html),
I get a connection refused error:
consumer: Cannot connect to amqp://guest@127.0.0.1:5672//: [Errno 111]
Connection refused.
When I install rabbitmq on the same container as celery it works fine.
What do I need to do to have the containers interact with each other?
[edit 2016]
Direct links are deprecated now. The new way to link containers is docker network connect. It works quite similarly to virtual networks and has a wider feature set than the old way of linking.
First you create your named containers:
docker run --name rabbitmq -d -p :5672 markellul/rabbitmq /usr/sbin/rabbitmq-server
docker run --name celery -it markellul/celery /bin/bash
Then you create a network (last parameter is your network name):
docker network create -d bridge --subnet 172.25.0.0/16 mynetwork
Connect the containers to your newly created network:
docker network connect mynetwork rabbitmq
docker network connect mynetwork celery
Now, both containers are in the same network and can communicate with each other.
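On a user-defined network, Docker's embedded DNS resolves container names, so from inside the celery container you can reach the broker by name (a sketch, assuming the image ships ping):
ping -c 1 rabbitmq
The broker URL then becomes amqp://guest@rabbitmq:5672// (guest is rabbitmq's default account).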
A very detailed user guide can be found at Work with networks: Connect containers.
[old answer]
There is a new feature in Docker 0.6.5 called linking, which is meant to help communication between Docker containers.
First, create your rabbitmq container as usual. Note that I also used the new "name" feature, which makes life a little bit easier:
docker run --name rabbitmq -d -p :5672 markellul/rabbitmq /usr/sbin/rabbitmq-server
You can use the link parameter to map a container (we use the name here; the ID would be OK too):
docker run --link rabbitmq:amq -i -t markellul/celery /bin/bash
Now you have access to the IP and port of the rabbitmq container, because Docker automatically added some environment variables:
$AMQ_PORT_5672_TCP_ADDR
$AMQ_PORT_5672_TCP_PORT
In addition, Docker adds a host entry for the source container to the /etc/hosts file. In this example, amq will be a defined host in the container.
From the Docker documentation:
Unlike host entries in the /etc/hosts file, IP addresses stored in the environment variables are not automatically updated if the source container is restarted. We recommend using the host entries in /etc/hosts to resolve the IP address of linked containers.
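So inside the celery container, the broker from the tutorial can be addressed via the linked hostname instead of 127.0.0.1, e.g. (guest is rabbitmq's default account):
amqp://guest@amq:5672//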
Just get your container's IP, and connect to it from another container:
CONTAINER_IP=$(sudo docker inspect --format '{{ .NetworkSettings.IPAddress }}' $CONTAINER_ID)
echo $CONTAINER_IP
When you specify -p 5672, what Docker does is open up a new port, such as 49xxx, on the host and forward it to port 5672 of the container.
You should be able to see which port is forwarding to the container by running:
sudo docker ps -a
From there, you can connect directly to the host IP address like so:
amqp://guest@HOST_IP:49xxx
You can't use localhost, because each container is basically its own localhost.
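docker port also prints the mapping directly; the output below is illustrative:
sudo docker port <container-id> 5672
0.0.0.0:49154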
Create the images:
docker build -t "imagename1" .
docker build -t "imagename2" .
Run the Docker images:
docker run -it -p 8000:8000 --name=imagename1 imagename1
docker run -it -p 8080:8080 --name=imagename2 imagename2
Create a network:
docker network create -d bridge "networkname"
Connect the containers created above to the network:
docker network connect "networkname" "imagename1"
docker network connect "networkname" "imagename2"
We can add any number of containers to the network. To see the connected containers, inspect the network:
docker network inspect "networkname"
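Once connected, the containers can reach each other by container name over the network; for example (assuming the images ship ping):
docker exec -it imagename1 ping -c 1 imagename2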
I think you can't connect to another container directly by design - that would be the responsibility of the host. An example of sharing data between containers using volumes is given here: http://docs.docker.io/en/latest/examples/couchdb_data_volumes/, but I don't think that is what you're looking for.
I recently found out about https://github.com/toscanini/maestro - that might suit your needs. Let us know if it does :), I haven't tried it myself yet.
Edit: note that you can read here that native "container wiring and service discovery" is on the roadmap. I guess version 0.7 or 0.8 at the latest.
You can get the docker instance IP with...
CID=$(sudo docker run -d -p :5672 markellul/rabbitmq /usr/sbin/rabbitmq-server); sudo docker inspect $CID | grep IPAddress
But that's not very useful.
You can use pipework to create a private network between docker containers.
This is currently on the 0.8 roadmap:
https://github.com/dotcloud/docker/issues/1143