Invalid Registry Endpoint pushing docker image - amazon-s3

I built a Docker image with Docker 1.0 and tried to push it to a private Docker registry backed by S3, but it gives me "invalid registry endpoint":
docker push localhost:5000/company/appname
2014/06/20 12:50:07 Error: Invalid Registry endpoint: Get http://localhost:5000/v1/_ping: read tcp 127.0.0.1:5000: connection reset by peer
The registry was started with settings similar to the example (adding the AWS region), and it does respond if I telnet localhost 5000.
docker run \
-e SETTINGS_FLAVOR=s3 \
-e AWS_BUCKET=my-docker-images \
-e STORAGE_PATH=/registry \
-e AWS_KEY=AAAA \
-e AWS_SECRET=BBBBBBB \
-e AWS_REGION=eu-west-1 \
-e SEARCH_BACKEND=sqlalchemy \
-p 5000:5000 \
registry &
S3 logging for the bucket:
8029384029384092830498 my-docker-images [16/Jun/2014:19:25:56 +0000] 123.123.123.127 arn:aws:iam::1234567890:user/docker-image-manager C9976333A1EFBB7A REST.GET.BUCKET - "GET /?prefix=registry/repositories/&delimiter=/ HTTP/1.1" 200 - 291 - 39 39 "-" "Boto/2.27.0 Python/2.7.6 Linux/3.8.0-42-generic" -

OK, it turned out to be due to me specifying AWS_REGION (eu-west-1) and the registry service failing partway through startup.
With that removed, the registry server finishes initializing, starts listening on the port, and a curl request to the /_ping URL returns a response.
https://github.com/dotcloud/docker-registry/issues/400
I was able to get enough console output to debug this by putting the settings in a config.yml file, setting loglevel to debug, and then having docker run the registry image with that config file instead of passing everything on the command line as above.
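For reference, a sketch of that config-file setup. The key names are modeled on the old docker-registry config_sample.yml and the DOCKER_REGISTRY_CONFIG variable on its README, so verify both against your registry version:

# config.yml -- sketch; key names modeled on docker-registry's config_sample.yml
common:
    loglevel: debug
prod:
    loglevel: debug
    storage: s3
    s3_region: eu-west-1
    s3_bucket: my-docker-images
    boto_bucket: my-docker-images
    storage_path: /registry
    s3_access_key: AAAA
    s3_secret_key: BBBBBBB
    search_backend: sqlalchemy

Then mount the file into the container and point the registry at it:

docker run \
    -v `pwd`/config.yml:/registry-conf/config.yml \
    -e DOCKER_REGISTRY_CONFIG=/registry-conf/config.yml \
    -e SETTINGS_FLAVOR=prod \
    -p 5000:5000 \
    registry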


Cannot SSH Dataproc Master in Cluster

I've been trying to create a Dataproc cluster using the Jupyter initialization script from the dataproc-initialization-actions repository.
But when I try to SSH into the master to reach the Jupyter interface, running this command:
gcloud compute ssh --zone=zone_name \
--ssh-flag="-D 10000" \
--ssh-flag="-N" \
--ssh-flag="-n" "cluster1-m" &
I get the error:
Permission denied (publickey). ERROR: (gcloud.compute.ssh)
[/usr/bin/ssh] exited with return code [255].
I could confirm that all ssh keys are created normally. I tried this other option then:
gcloud compute ssh --zone=zone_name \
--ssh-flag="-D 10000" \
--ssh-flag="-N" \
--ssh-flag="-n" "will#cluster1-m" &
This seems to work, as I can SSH into the instance, but now I get the error:
bind: Cannot assign requested address
channel_setup_fwd_listener_tcpip: cannot listen to port: 10000 Could
not request local forwarding.
For creating the cluster I used:
gcloud dataproc clusters create $CLUSTER_NAME \
--metadata "JUPYTER_PORT=8124,JUPYTER_CONDA_PACKAGES=numpy:pandas:scikit-learn:jinja2:mock:pytest:pytest-cov" \
--initialization-actions \
gs://dataproc-initialization-actions/jupyter/jupyter.sh \
--bucket $BUCKET_NAME
and I'm running this inside a Docker container based on Debian 8.9 (jessie).
If you need any extra information, please let me know.
If you've verified that a normal SSH into the cluster works and you're only getting the bind: Cannot assign requested address error, it probably means another SSH session on your local machine is already forwarding port 10000. When you hit the bind error, first try a different local port, such as -D 12345. You can also use top or your task manager to check whether a hanging ssh -D command is still running and occupying port 10000.
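For example, a quick way to check and retry on another local port (12345 is just an arbitrary free port; whether lsof or ss is available depends on your system):

# check whether some process already holds local port 10000
lsof -i :10000            # or: ss -ltnp | grep ':10000'
# retry the SOCKS proxy on a different local port
gcloud compute ssh --zone=zone_name \
  --ssh-flag="-D 12345" \
  --ssh-flag="-N" \
  --ssh-flag="-n" "cluster1-m" &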

Anonymous pull on docker repo in artifactory

I am on Artifactory version 4.6 and have the following requirements for the Docker registry:
Allow anonymous pulls on a Docker repository
Force authentication on the SAME Docker repository
I know this is available out of the box in later versions of Artifactory, but upgrading isn't an option for us for a while.
Does the following workaround work?
Create a virtual Docker repository on port 8443 without forced authentication; call it docker-virtual
Create a local Docker repository on port 8444 with forced authentication; call it docker-local
Configure docker-virtual with docker-local as its default deployment repository
docker pull docker-virtual should work
docker push docker-virtual should ask for credentials
Upon failure, I should be able to docker login docker-virtual
and then docker push docker-virtual/myImage
Not sure about the Artifactory side, but perhaps the following Docker advice helps.
You can run two registries in Docker: one read-write with authentication, and a second read-only with no authentication:
docker run -d -p 5000:5000 --restart=always --name registry \
-v `pwd`/certs:/certs:ro \
-v `pwd`/auth/htpasswd:/auth/htpasswd:ro \
-v `pwd`/registry:/var/lib/registry \
-e "REGISTRY_HTTP_TLS_CERTIFICATE=/certs/host-cert.pem" \
-e "REGISTRY_HTTP_TLS_KEY=/certs/host-key.pem" \
-e "REGISTRY_AUTH=htpasswd" \
-e "REGISTRY_AUTH_HTPASSWD_REALM=My Registry" \
-e "REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd" \
-e "REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY=/var/lib/registry" \
registry:2
docker run -d -p 5001:5000 --restart=always --name registry-ro \
-v `pwd`/certs:/certs:ro \
-v `pwd`/auth/htpasswd:/auth/htpasswd:ro \
-v `pwd`/registry:/var/lib/registry:ro \
-e "REGISTRY_HTTP_TLS_CERTIFICATE=/certs/host-cert.pem" \
-e "REGISTRY_HTTP_TLS_KEY=/certs/host-key.pem" \
-e "REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY=/var/lib/registry" \
registry:2
Note the volume settings for /var/lib/registry in each container. Then to pull from the anonymous registry, you'd just need to change the port. Since the filesystem is RO, any attempt to push to 5001 will fail.
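For example, assuming both registries are published under a hypothetical hostname myregistry.example.com, clients would use the two ports like this:

# pulls work anonymously against the read-only endpoint
docker pull myregistry.example.com:5001/myteam/myimage
# pushes must go through the authenticated read-write endpoint
docker login myregistry.example.com:5000
docker push myregistry.example.com:5000/myteam/myimage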
The closest thing you can achieve is making docker push fail without credentials while pull succeeds.
No idea if this works with Artifactory, sorry, but you could try this handy project for Docker registry auth: https://hub.docker.com/r/cesanta/docker_auth/
Configure the registry to use it:
# registry config.yml
...
auth:
  token:
    # can be the same as your docker registry if you use nginx to proxy /auth to docker_auth
    # https://docs.docker.com/registry/recipes/nginx/
    realm: "example.com:5001/auth"
    service: "Docker registry"
    issuer: "Docker Registry auth server"
    rootcertbundle: /certs/domain.crt
And allow anonymous with the corresponding ACL:
# cesanta/docker_auth auth_config.yml
...
users:
  # Password is specified as a BCrypt hash. Use htpasswd -B to generate.
  "admin":
    password: "$2y$05$LO.vzwpWC5LZGqThvEfznu8qhb5SGqvBSWY1J3yZ4AxtMRZ3kN5jC" # badmin
  "": {} # Allow anonymous (no "docker login") access.
ldap_auth:
  # See: https://github.com/cesanta/docker_auth/blob/master/examples/ldap_auth.yml
acl:
  # See https://github.com/cesanta/docker_auth/blob/master/examples/reference.yml#L178
  - match: {account: "/.+/"}
    actions: ["*"]
    comment: "Logged in users do anything."
  - match: {account: ""}
    actions: ["pull"]
    comment: "Anonymous users can pull anything."
  # Access is denied by default.
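To run the auth server itself, the cesanta/docker_auth README suggests an invocation along these lines (the mount paths here are assumptions; point them at wherever your config and certificates actually live):

docker run -d -p 5001:5001 --restart=always --name docker_auth \
  -v `pwd`/auth:/config:ro \
  -v `pwd`/certs:/certs:ro \
  cesanta/docker_auth /config/auth_config.yml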

Restcomm in Amazon ECS

I am trying to run Restcomm in Amazon ECS using the Docker image, but I've run into some problems.
This is the command I'm running:
docker run \
  -e ENVCONFURL="https://raw.githubusercontent.com/RestComm/Restcomm-Docker/master/scripts/restcomm_env_basicAmazon.sh" \
  -p 80:80 -p 443:443 -p 9990:9990 -p 5060:5060 -p 5061:5061 -p 5062:5062 -p 5063:5063 \
  -p 5060:5060/udp -p 65000-65050:65000-65050/udp \
  restcomm/restcomm:latest
I'm able to access the administration portal, Olympus, and RVD, but when I call +1234 or receive a call from Nexmo it fails. Here are the logs: https://gist.github.com/antonmry/61ec970be3ff9fd923538899768bbc76
I guess the problem is related to running restcomm_env_basicAmazon.sh, but I'm not sure. How do you run it in Amazon? Any help would be welcome.
Best regards,
Antón
I can see in the logs below that either you didn't specify a VoiceRSS key, or the free VoiceRSS key that ships by default has used up its quota. Please create a new VoiceRSS key, set it in your own configuration file, and retry:
19:36:02,601 ERROR [org.mobicents.servlet.restcomm.tts.VoiceRSSSpeechSynthesizer] (RestComm-akka.actor.default-dispatcher-111) There was an exception while trying to synthesize message: org.mobicents.servlet.restcomm.tts.api.SpeechSynthesizerException: ERROR: The API key is not available!
19:36:02,602 INFO [org.mobicents.servlet.restcomm.interpreter.VoiceInterpreter] (RestComm-akka.actor.default-dispatcher-111) ********** VoiceInterpreter's akka://RestComm/user/$y Current State: synthesizing
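As a sketch, one way to do that is to host your own copy of the environment script with the key filled in. The VOICERSS_KEY variable name is an assumption based on the Restcomm-Docker environment scripts; check the script you actually use:

# download the stock Amazon environment script
curl -O https://raw.githubusercontent.com/RestComm/Restcomm-Docker/master/scripts/restcomm_env_basicAmazon.sh
# set your own VoiceRSS key in it (variable name assumed; verify in the script)
sed -i 's/VOICERSS_KEY=.*/VOICERSS_KEY="your-voicerss-key"/' restcomm_env_basicAmazon.sh
# host the edited script somewhere reachable, then point ENVCONFURL at it
docker run -e ENVCONFURL="https://your-host.example.com/restcomm_env_basicAmazon.sh" \
  -p 80:80 -p 443:443 -p 9990:9990 -p 5060:5060 -p 5061:5061 -p 5062:5062 -p 5063:5063 \
  -p 5060:5060/udp -p 65000-65050:65000-65050/udp \
  restcomm/restcomm:latest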

Connection refused in Docker containers communicating through exposed ports

Hi, I need to connect three Docker containers so that they can work together. I call these three containers:
container 1 - pga (apache webserver at port 80)
container 2 - server (apache airavata server at port 8930)
container 3 - rabbit (RabbitMQ at port 5672)
I started RabbitMQ (container 3) as:
docker run -i -d --name rabbit -p 15672:15672 -t rabbitmq:3-management
I have started server (container 2) as
docker run -i -d --name server --link rabbit:rabbit --expose 8930 -t airavata_server /bin/bash
Now from inside server (container 2) I can access rabbit (container 3) at port 5672. When I try
nc -zv container_3_port 5672, it says the connection succeeded.
Up to this point I am happy with the Docker connection through the link.
Now I have created another container pga(container 1) as
docker run -i -d --name pga --link server:server -p 8080:80 -t psaha4/airavata_pga /bin/bash
Now from inside the new pga container, when I try to access the service of server (container 2), the connection is refused.
I have verified that the service is running on port 8930 inside the server container and that the port was exposed when the container was created, but it still refuses connections from the other containers it is linked to.
I could not find a similar situation described anywhere and am clueless about how to debug it. Please help me find a way out.
The output of the command docker exec server lsof -i :8930:
exec: "lsof": executable file not found in $PATH
Cannot run exec command fb207d2fe5b902419c31cb8466bcee4ba551b097c39a7405824c320fcc67f5e2 in container 995b86032b0421c5199eb635bd65669b1aa93f96b60da4a49328050f7048197a: [8] System error: exec: "lsof": executable file not found in $PATH
Error starting exec command in container fb207d2fe5b902419c31cb8466bcee4ba551b097c39a7405824c320fcc67f5e2: Cannot run exec command fb207d2fe5b902419c31cb8466bcee4ba551b097c39a7405824c320fcc67f5e2 in container 995b86032b0421c5199eb635bd65669b1aa93f96b60da4a49328050f7048197a: [8] System error: exec: "lsof": executable file not found in $PATH
NOTE: I intend to expand on this, but my kid's just been sick. I'll address the debugging issue from the question when I get a chance.
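In the meantime, since lsof isn't installed in the image, a couple of alternatives for checking the listening socket; which of these exists depends on the base image:

docker exec server netstat -tln   # if net-tools is installed
docker exec server ss -tln        # if iproute2 is installed
# or install lsof first on a Debian/Ubuntu-based image (image family assumed):
docker exec server sh -c 'apt-get update && apt-get install -y lsof'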
You may find it easier to use docker-compose for this, as it lets you run them all with one command and keeps the configuration under source control. An example configuration file (from my website) looks like this:
database:
  build: database
  env_file:
    - database/.env
api:
  build: api
  command: /opt/server/dist/build/ILikeWhenItWorks/ILikeWhenItWorks
  env_file:
    - api/.env
  links:
    - database
  tty: false
  volumes:
    - /etc/ssl/certs/:/etc/ssl/certs/
    - api:/opt/server/
webserver:
  build: webserver
  ports:
    - "80:80"
    - "443:443"
  links:
    - api
  volumes_from:
    - api
I find these files very readable and comprehensible; they essentially say exactly what they're doing. You can see how this relates to the surrounding directory structure in my source code.
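Applied to the three containers from the question, a hypothetical docker-compose.yml in the same classic v1 format (image names and ports taken from the docker run commands above) might look like:

rabbit:
  image: rabbitmq:3-management
  ports:
    - "15672:15672"
server:
  image: airavata_server
  links:
    - rabbit
  expose:
    - "8930"
pga:
  image: psaha4/airavata_pga
  links:
    - server
  ports:
    - "8080:80"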

Setting a static IP to docker container using LXC driver

I installed Docker on my Ubuntu 14.04 laptop and pulled the registry image from the central registry. To give the container a static IP address, I first edited /etc/default/docker and added -e lxc to the DOCKER_OPTS variable.
Then, to run my local registry, I used the following command:
docker run \
-i -t -h myreg \
--net="none" \
--lxc-conf="lxc.network.hwaddr=91:21:de:b0:6b:61" \
--lxc-conf="lxc.network.type = veth" \
--lxc-conf="lxc.network.ipv4 = 172.17.0.20/16" \
--lxc-conf="lxc.network.ipv4.gateway = 172.17.42.1" \
--lxc-conf="lxc.network.link = docker0" \
--lxc-conf="lxc.network.name = eth0" \
--lxc-conf="lxc.network.flags = up" \
--name myreg \
-p 5000:5000 \
-d registry \
/bin/bash
Then I used docker attach myreg to get a shell in the container. After installing the net-tools package, I checked its IP address and saw that it was 172.17.0.20 as expected. I could also ping it from my host.
The problem is that when I checked the configuration of this container with docker inspect myreg, the NetworkSettings part of the output was the following:
"NetworkSettings": {
"Bridge": "docker0",
"Gateway": "172.17.42.1",
"IPAddress": "172.17.0.8",
"IPPrefixLen": 16,
"PortMapping": null,
"Ports": {
"5000/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "5000"
}
]
}
It shows 172.17.0.8 as the container's IP address, which is the value that would have been assigned if I were not using the LXC driver. This becomes a problem when I use docker push to push a tagged image to this local registry, because Docker uses this wrong IP and throws errors like the following:
de7e1cfc] +job push(127.0.0.1:5000/mongo)
2014/07/18 17:10:19 Can't forward traffic to backend tcp/172.17.0.8:5000: dial tcp 172.17.0.8:5000: no route to host
2014/07/18 17:10:22 Can't forward traffic to backend tcp/172.17.0.8:5000: dial tcp 172.17.0.8:5000: no route to host
What is the problem here? Or am I doing something wrong?
What version of Docker are you running? Docker 1.0 no longer uses LXC by default; it has been replaced with Docker's own libcontainer. The LXC commands didn't work for me either when following this blog: http://goldmann.pl/blog/2014/01/21/connecting-docker-containers-on-multiple-hosts/#_set_up
If you downgrade to 0.7 and follow the LXC process, it will work.
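One quick way to confirm which driver your daemon is actually using (the output format below is from the Docker 1.x era):

docker version
docker info | grep -i 'execution driver'
# "Execution Driver: native-0.2" means libcontainer is in use and the
# --lxc-conf settings are silently ignored; "lxc-..." means the LXC
# driver is active.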