I installed Docker on my Ubuntu 14.04 laptop and pulled the Docker registry image from the central registry. To fix the container's IP address to a static value, I first edited /etc/default/docker and added -e lxc to the DOCKER_OPTS variable.
Then, to run my local registry, I used the following command:
docker run \
-i -t -h myreg \
--net="none" \
--lxc-conf="lxc.network.hwaddr=91:21:de:b0:6b:61" \
--lxc-conf="lxc.network.type = veth" \
--lxc-conf="lxc.network.ipv4 = 172.17.0.20/16" \
--lxc-conf="lxc.network.ipv4.gateway = 172.17.42.1" \
--lxc-conf="lxc.network.link = docker0" \
--lxc-conf="lxc.network.name = eth0" \
--lxc-conf="lxc.network.flags = up" \
--name myreg \
-p 5000:5000 \
-d registry \
/bin/bash
Then I used docker attach myreg to access the container's shell. After installing the net-tools package, I checked the container's IP address and saw that it was 172.17.0.20, as expected. I pinged it from my host and it replied.
The problem is that when I checked the configuration of this container with docker inspect myreg, the NetworkSettings part of the output was the following:
"NetworkSettings": {
"Bridge": "docker0",
"Gateway": "172.17.42.1",
"IPAddress": "172.17.0.8",
"IPPrefixLen": 16,
"PortMapping": null,
"Ports": {
"5000/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "5000"
}
]
}
It was showing 172.17.0.8 as the container's IP address. That is the value that would have been assigned if I were not using the lxc driver. This becomes a problem when I use the docker push command to push a tagged image to this local registry, because Docker uses this wrong IP for the push and throws the following error:
de7e1cfc] +job push(127.0.0.1:5000/mongo)
2014/07/18 17:10:19 Can't forward traffic to backend tcp/172.17.0.8:5000: dial tcp 172.17.0.8:5000: no route to host
2014/07/18 17:10:22 Can't forward traffic to backend tcp/172.17.0.8:5000: dial tcp 172.17.0.8:5000: no route to host
What is the problem here? Or am I doing something wrong?
What version of Docker are you running? Docker 1.0 no longer uses LXC by default; it has been replaced with Docker's own libcontainer. The LXC commands didn't work for me when following this blog: http://goldmann.pl/blog/2014/01/21/connecting-docker-containers-on-multiple-hosts/#_set_up
If you downgrade to 0.7 and follow the LXC process, it will work.
Related
I am on Artifactory version 4.6 and have the following requirements for the Docker registry:
Allow anonymous pulls on docker repository
Force authentication on the SAME docker repository
I know this is available out of the box in later versions of Artifactory. However, upgrading isn't an option for us for a while.
Does the following workaround work?
Create a virtual Docker repository on port 8443 and don't force authentication; call it docker-virtual
Create a local Docker repository on port 8444 and force authentication; call it docker-local
Configure 'docker-virtual' with the default deployment directory as 'docker-local'
docker pull docker-virtual should work
docker push docker-virtual should ask for credentials
Upon failure, I should be able to docker login docker-virtual
and docker push docker-virtual/myImage
Not sure about the Artifactory side, but perhaps the following Docker advice helps.
You can run two registries in Docker: one read-write (RW) with authentication, and a second read-only (RO) without any authentication:
docker run -d -p 5000:5000 --restart=always --name registry \
-v `pwd`/certs:/certs:ro \
-v `pwd`/auth/htpasswd:/auth/htpasswd:ro \
-v `pwd`/registry:/var/lib/registry \
-e "REGISTRY_HTTP_TLS_CERTIFICATE=/certs/host-cert.pem" \
-e "REGISTRY_HTTP_TLS_KEY=/certs/host-key.pem" \
-e "REGISTRY_AUTH=htpasswd" \
-e "REGISTRY_AUTH_HTPASSWD_REALM=My Registry" \
-e "REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd" \
-e "REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY=/var/lib/registry" \
registry:2
docker run -d -p 5001:5000 --restart=always --name registry-ro \
-v `pwd`/certs:/certs:ro \
-v `pwd`/auth/htpasswd:/auth/htpasswd:ro \
-v `pwd`/registry:/var/lib/registry:ro \
-e "REGISTRY_HTTP_TLS_CERTIFICATE=/certs/host-cert.pem" \
-e "REGISTRY_HTTP_TLS_KEY=/certs/host-key.pem" \
-e "REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY=/var/lib/registry" \
registry:2
Note the volume settings for /var/lib/registry in each container. Then to pull from the anonymous registry, you'd just need to change the port. Since the filesystem is RO, any attempt to push to 5001 will fail.
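For example, a client session against the two endpoints might look like this (myregistry.example.com and myimage are hypothetical names):
docker pull myregistry.example.com:5001/myimage    # anonymous pull from the RO registry
docker login myregistry.example.com:5000           # authenticate against the RW registry
docker push myregistry.example.com:5000/myimage    # authenticated push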
The closest thing you can achieve is failing on docker push without credentials (while succeeding with pull).
No idea if this works with Artifactory, sorry. You could try this handy project for Docker registry auth.
Configure the registry to use https://hub.docker.com/r/cesanta/docker_auth/:
# registry config.yml
...
auth:
  token:
    # can be the same as your docker registry if you use nginx to proxy /auth to docker_auth
    # https://docs.docker.com/registry/recipes/nginx/
    realm: "example.com:5001/auth"
    service: "Docker registry"
    issuer: "Docker Registry auth server"
    rootcertbundle: /certs/domain.crt
And allow anonymous access with the corresponding ACL:
# cesanta/docker_auth auth_config.yml
...
users:
  # Password is specified as a BCrypt hash. Use htpasswd -B to generate.
  "admin":
    password: "$2y$05$LO.vzwpWC5LZGqThvEfznu8qhb5SGqvBSWY1J3yZ4AxtMRZ3kN5jC" # badmin
  "": {} # Allow anonymous (no "docker login") access.
ldap_auth:
  # See: https://github.com/cesanta/docker_auth/blob/master/examples/ldap_auth.yml
acl:
  # See https://github.com/cesanta/docker_auth/blob/master/examples/reference.yml#L178
  - match: {account: "/.+/"}
    actions: ["*"]
    comment: "Logged in users do anything."
  - match: {account: ""}
    actions: ["pull"]
    comment: "Anonymous users can pull anything."
# Access is denied by default.
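To tie it together, the auth server itself can be started like this (a sketch; the paths are placeholders, and the invocation follows the cesanta/docker_auth README):
docker run -d -p 5001:5001 --restart=always --name docker_auth \
  -v /path/to/config_dir:/config:ro \
  -v /var/log/docker_auth:/logs \
  cesanta/docker_auth /config/auth_config.yml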
I'm a newbie at Docker. I'm creating a Hello, World example. All I'm trying to do is bring up Apache in a container and then view the default website from the host machine.
Dockerfile
FROM centos:latest
RUN yum install epel-release -y
RUN yum install wget -y
RUN yum install httpd -y
EXPOSE 80
ENTRYPOINT ["/usr/sbin/httpd", "-D", "FOREGROUND"]
And then I build it:
> docker build .
And then I tag it:
> docker tag 17283f566320 my:apache
And then I run it:
> docker run -p 80:9191 my:apache
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 172.17.0.2. Set the 'ServerName' directive globally to suppress this message
It then runs....
In another terminal window, I attempt to issue the curl command to view the default web site.
> curl -XGET http://0.0.0.0:9191
curl: (7) Failed to connect to 0.0.0.0 port 9191: Connection refused
> curl -XGET http://localhost:9191
curl: (7) Failed to connect to localhost port 9191: Connection refused
> curl -XGET http://127.0.0.1:9191
curl: (7) Failed to connect to 127.0.0.1 port 9191: Connection refused
Just to make sure that I got the port correct, I run this:
> docker ps -l
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5aed4063b1f6 my:apache "/usr/sbin/httpd -D F" 43 seconds ago Up 42 seconds 80/tcp, 0.0.0.0:80->9191/tcp angry_hodgkin
Thanks to all. My ports were reversed:
> docker run -p 9191:80 my:apache
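The -p flag is -p <host_port>:<container_port>, so with the line above, port 9191 on the host forwards to Apache listening on port 80 inside the container. A quick sanity check (a sketch):
curl -I http://localhost:9191    # should now return an HTTP response from Apache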
Although you created the containers on your local machine, they are actually running on a different machine (a virtual machine).
First, check the IP of your Docker machine (the virtual machine):
$ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM
default * virtualbox Running tcp://192.168.99.100
Then run the curl command to view the default website served by Apache inside the container:
curl http://192.168.99.100:9191
If you are running Docker natively on an Ubuntu machine, you should be able to access your container via localhost.
If you are using Mac or Windows, your Docker container does not run on localhost but on its own IP. You can get your container's IP with docker inspect <container id> | grep IPAddress, or, if you are using docker-machine, with docker-machine ip <docker_machine_name>.
Related info:
http://networkstatic.net/10-examples-of-how-to-get-docker-container-ip-address/
https://docs.docker.com/machine/reference/ip/
How to get a Docker container's IP address from the host?
So your curl call should be something like curl <container_ip>:<container_exposed_port>.
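For instance (a sketch; the container name my_apache is hypothetical):
CONTAINER_IP=$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' my_apache)
curl "http://${CONTAINER_IP}:80"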
You can also tag your image at build time with the -t parameter, like this:
docker build -t my:image .
Another tip: you can optimize your Dockerfile by combining the yum install commands, like this:
RUN yum install -y \
epel-release \
wget \
httpd
http://blog.tutum.co/2014/10/22/how-to-optimize-your-dockerfile/
I have some Docker containers that contain several OSes. I would like to make these containers reachable via SSH directly from the Internet. I can use only one public IP address, and docker0 is currently in bridge mode with its default IP. How can I configure Docker to make the containers individually accessible from everywhere?
You do this by mapping each container's SSH port to a different port on the public IP address.
Like:
$ docker run -d -p 22000:22 --name sshcontainer1 some_image
$ docker run -d -p 22001:22 --name sshcontainer2 some_image
$ docker run -d -p 22002:22 --name sshcontainer3 some_image
...
Then you communicate this port to your customer. Done.
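For example, your customer would then connect with something like this (203.0.113.10 is a placeholder public IP):
ssh -p 22000 root@203.0.113.10    # lands in sshcontainer1
ssh -p 22001 root@203.0.113.10    # lands in sshcontainer2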
The Docker documentation has an example of setting up an SSH server:
https://docs.docker.com/examples/running_ssh_service/
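In case that link moves, the core of the example is a small Dockerfile along these lines (a sketch, not the verbatim docs version):
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN echo 'root:screencast' | chpasswd
RUN sed -i 's/PermitRootLogin .*/PermitRootLogin yes/' /etc/ssh/sshd_config
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]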
I built a Docker container with Docker 1.0 and tried to push it to a private Docker registry backed by S3, but it gives me "invalid registry endpoint".
docker push loca.lhost:5000/company/appname
2014/06/20 12:50:07 Error: Invalid Registry endpoint: Get http://loca.lhost:5000/v1/_ping: read tcp 127.0.0.1:5000: connection reset by peer
The registry was started with settings similar to the example (adding the AWS region), and it does respond if I telnet localhost 5000.
docker run \
-e SETTINGS_FLAVOR=s3 \
-e AWS_BUCKET=my-docker-images \
-e STORAGE_PATH=/registry \
-e AWS_KEY=AAAA \
-e AWS_SECRET=BBBBBBB \
-e AWS_REGION=eu-west-1 \
-e SEARCH_BACKEND=sqlalchemy \
-p 5000:5000 \
registry &
s3 logging for the bucket:
8029384029384092830498 my-docker-images [16/Jun/2014:19:25:56 +0000] 123.123.123.127 arn:aws:iam::1234567890:user/docker-image-manager C9976333A1EFBB7A REST.GET.BUCKET - "GET /?prefix=registry/repositories/&delimiter=/ HTTP/1.1" 200 - 291 - 39 39 "-" "Boto/2.27.0 Python/2.7.6 Linux/3.8.0-42-generic" -
OK, it was due to my specifying AWS_REGION (eu-west-1), which made the registry service fail partway through startup.
Taking that out, the registry server finishes initializing and starts listening on the port, and a curl request to the /_ping url returned a response.
https://github.com/dotcloud/docker-registry/issues/400
I was able to retrieve enough console information to debug this by putting the settings in a config.yml file, setting loglevel to debug, and then having Docker run the registry image with the config file passed in, rather than setting everything on the command line as I did above.
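For reference, a sketch of that debugging setup (the flavor name s3dbg and the mount path are my own; the key names follow the old docker-registry sample config, which reads its file from the DOCKER_REGISTRY_CONFIG environment variable):
cat > config.yml <<'EOF'
common:
  loglevel: debug
s3dbg:
  storage: s3
  s3_bucket: my-docker-images
  storage_path: /registry
  s3_access_key: AAAA
  s3_secret_key: BBBBBBB
EOF

docker run \
  -e SETTINGS_FLAVOR=s3dbg \
  -e DOCKER_REGISTRY_CONFIG=/registry-conf/config.yml \
  -v $(pwd)/config.yml:/registry-conf/config.yml:ro \
  -p 5000:5000 \
  registry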
I want to run rabbitmq-server in one Docker container and connect to it from another container using celery (http://celeryproject.org/).
I have rabbitmq running using the command below:
sudo docker run -d -p :5672 markellul/rabbitmq /usr/sbin/rabbitmq-server
and running the celery via
sudo docker run -i -t markellul/celery /bin/bash
When I try the very basic tutorial to validate the connection (http://docs.celeryproject.org/en/latest/getting-started/first-steps-with-celery.html), I get a connection refused error:
consumer: Cannot connect to amqp://guest@127.0.0.1:5672//: [Errno 111]
Connection refused.
When I install rabbitmq in the same container as celery, it works fine.
What do I need to do to have the containers interact with each other?
[edit 2016]
Direct links are deprecated now. The new way to link containers is docker network connect. It works quite similarly to virtual networks and has a wider feature set than the old way of linking.
First you create your named containers:
docker run --name rabbitmq -d -p :5672 markellul/rabbitmq /usr/sbin/rabbitmq-server
docker run --name celery -it markellul/celery /bin/bash
Then you create a network (the last parameter is your network name):
docker network create -d bridge --subnet 172.25.0.0/16 mynetwork
Connect the containers to your newly created network:
docker network connect mynetwork rabbitmq
docker network connect mynetwork celery
Now, both containers are in the same network and can communicate with each other.
A very detailed user guide can be found at Work with networks: Connect containers.
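A quick way to verify the connectivity (a sketch; it assumes the container names above and a Docker version with the embedded DNS server):
docker exec celery ping -c 1 rabbitmq    # the name resolves inside the user-defined network
# celery can then use a broker URL such as amqp://guest@rabbitmq:5672//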
[old answer]
There is a new feature in Docker 0.6.5 called linking, which is meant to help the communication between Docker containers.
First, create your rabbitmq container as usual. Note that I also used the new "name" feature, which makes life a little bit easier:
docker run --name rabbitmq -d -p :5672 markellul/rabbitmq /usr/sbin/rabbitmq-server
You can use the link parameter to map a container (we use the name here; the ID would be OK too):
docker run --link rabbitmq:amq -i -t markellul/celery /bin/bash
Now you have access to the IP and port of the rabbitmq container, because Docker automatically added some environment variables:
$AMQ_PORT_5672_TCP_ADDR
$AMQ_PORT_5672_TCP_PORT
In addition, Docker adds a host entry for the source container to the /etc/hosts file. In this example, amq will be a defined host in the container.
From the Docker documentation:
Unlike host entries in the /etc/hosts file, IP addresses stored in the environment variables are not automatically updated if the source container is restarted. We recommend using the host entries in /etc/hosts to resolve the IP address of linked containers.
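So inside the celery container you can rely on the alias directly (a sketch):
ping -c 1 amq    # resolves via the /etc/hosts entry added by --link
# and point celery at the broker with, e.g., BROKER_URL = "amqp://guest@amq:5672//"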
Just get your container's IP and connect to it from another container:
CONTAINER_IP=$(sudo docker inspect --format '{{ .NetworkSettings.IPAddress }}' $CONTAINER_ID)
echo $CONTAINER_IP
When you specify -p 5672, what Docker does is open a new port on the host, such as 49xxx, and forward it to port 5672 of the container.
You should be able to see which port is forwarded to the container by running:
sudo docker ps -a
From there, you can connect directly to the host IP address like so:
amqp://guest@HOST_IP:49xxx
You can't use localhost, because each container is basically its own localhost.
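You can also ask Docker directly which host port was assigned (a sketch; the container ID and the resulting port are examples):
sudo docker port $CONTAINER_ID 5672
# -> 0.0.0.0:49154, i.e. use amqp://guest@HOST_IP:49154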
Create the images:
docker build -t "imagename1" .
docker build -t "imagename2" .
Run the Docker images:
docker run -it -p 8000:8000 --name=imagename1 imagename1
docker run -it -p 8080:8080 --name=imagename2 imagename2
Create Network:
docker network create -d bridge "networkname"
Connect the containers (named imagename1 and imagename2 above) to the network:
docker network connect "networkname" "imagename1"
docker network connect "networkname" "imagename2"
We can add any number of containers to the network. You can inspect it with:
docker network inspect "networkname"
I think you can't connect to another container directly by design - that would be the responsibility of the host. An example of sharing data between containers using volumes is given here: http://docs.docker.io/en/latest/examples/couchdb_data_volumes/, but I don't think that is what you're looking for.
I recently found out about https://github.com/toscanini/maestro - that might suit your needs. Let us know if it does :), I haven't tried it myself yet.
Edit: Note that you can read here that native "Container wiring and service discovery" is on the roadmap. I guess 0.7 or 0.8 at the latest.
You can get the Docker instance's IP with:
CID=$(sudo docker run -d -p :5672 markellul/rabbitmq /usr/sbin/rabbitmq-server); sudo docker inspect $CID | grep IPAddress
But that's not very useful.
You can use pipework to create a private network between docker containers.
This is currently on the 0.8 roadmap:
https://github.com/dotcloud/docker/issues/1143