The scenario is: I want to exec docker run & push inside the Drone Docker runner, and the Docker registry and the Drone runner are on the same server. So I want to pass the host IP as a variable into the Drone pipeline container, so that I can push a Docker image without a remote registry server. But it seems that only Drone's allowed environment variables can be used in ‘${}’. I tried exporting EXTERNALIP on the host machine and reading ${EXTERNALIP}, but got nothing.
So is there some way I can get the external IP for communicating with localhost, or another way to achieve this?
You should be able to push to localhost if it's on the same host. That said, I was not able to do this using the packages plugin, but was able to replicate it using docker directly:
steps:
- name: docker-${DRONE_EVENT}
  image: docker:19.03
  when:
    event: [ push, pull_request ]
    status: [ success ]
  environment:
    DOCKER_PASSWORD:
      from_secret: docker_password
  commands:
  - echo $DOCKER_PASSWORD | docker login --username user_name --password-stdin localhost
  - docker build -t localhost/demo-web:latest .
  - if [ "${DRONE_EVENT}" == "push" ]; then docker push localhost/demo-web:latest; fi;
  volumes:
  - name: docker-socket
    path: /var/run/docker.sock

volumes:
- name: docker-socket
  host:
    path: /var/run/docker.sock
A couple of caveats: obviously, you will need trusted access enabled in the repo configuration, or --trusted if using local exec. Enjoy!
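For reference, a minimal sketch of the local invocation (assuming the drone CLI is installed and the pipeline file is the default .drone.yml):
drone exec --trusted .drone.yml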
Related
Can't log in to my private Docker registry from GitLab CI.
Scenario:
- GitLab CE Omnibus installation; the registry is inside GitLab.
- gitlab-runner with the Docker executor, running as a container in a Docker Swarm cluster.
- gitlab-runner has a ca.crt in /etc/gitlab-runner/certs/.
- The ca.crt contains the server, the intermediate, and the root certificate in the correct order.
- It's not a self-signed certificate; it's a wildcard certificate (*.domain.com).
- Inside the gitlab-runner container I can run curl https://registry.domain.com without error.
What I have tried:
- Adding the registry as insecure (in daemon.json and in .gitlab-ci.yml).
- Adding the certificate in the runner as registry.domain.com.crt.
.gitlab-ci.yml:
build_image:
  image: docker:19.03.8
  services:
    - name: docker:19.03.12-dind
      command: ["--insecure-registry=registry.domain.com:443"]
      alias: docker
  stage: build
  ...
  script:
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.domain.com
Note: I had already seen this, without success.
I still don't know what caused this issue, but the solution was to mount the Docker socket into the gitlab-runner:
gitlab-runner register <other_options> --docker-volumes /var/run/docker.sock:/var/run/docker.sock
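Equivalently, for a runner that is already registered, the same mount can be added by hand in the runner's config.toml (a sketch; adjust your existing [runners.docker] section rather than adding a new one):
[[runners]]
  # ...existing settings for this runner...
  [runners.docker]
    volumes = ["/var/run/docker.sock:/var/run/docker.sock"]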
Suppose, for example, that I want to make an SSH host in Docker. I understand that I can EXPOSE 22 inside the Dockerfile. I also understand that I can use -p 22222:22 so that I can SSH into that Docker container from another physical machine on my LAN on port 22222, as ssh my_username@docker_host_ip -p 22222. But suppose that I'm so lazy that I can't be bothered to docker run the container with the option -p 22222:22 every time. Is there a way that the option -p 22222:22 can be automated in a config file somewhere? In the Dockerfile, maybe?
You can use Docker Compose.
You can define the listening port in the docker-compose.yml file as below:
version: '2'
services:
  web:
    image: ubuntu
  ssh_service:
    build: .
    command: ssh ....
    volumes:
      - .:/code
    ports:
      - "22222:22"
    depends_on:
      - web
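With that in place, a hedged usage sketch (host name and user are placeholders):
docker-compose up -d
ssh my_username@docker_host_ip -p 22222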
Hi, I have a requirement of connecting three Docker containers so that they can work together. I call these three containers:
container 1 - pga (Apache web server on port 80)
container 2 - server (Apache Airavata server on port 8930)
container 3 - rabbit (RabbitMQ on port 5672)
I have started RabbitMQ (container 3) as:
docker run -i -d --name rabbit -p 15672:15672 -t rabbitmq:3-management
I have started the server (container 2) as:
docker run -i -d --name server --link rabbit:rabbit --expose 8930 -t airavata_server /bin/bash
Now from inside server (container 2) I can access rabbit (container 3) on port 5672. When I try
nc -zv container_3_port 5672, it says the connection is successful.
Up to this point I am happy with the Docker connection through links.
Now I have created another container, pga (container 1), as:
docker run -i -d --name pga --link server:server -p 8080:80 -t psaha4/airavata_pga /bin/bash
Now, from inside the new pga container, when I try to access the service of server (container 2), it says connection refused.
I have verified that, inside the server container, the service is running on port 8930 and that the port was exposed when the container was created, but it is still refusing connections from the other containers it is linked to.
I could not find a similar situation described anywhere and am clueless about how to debug this. Please help me find a way out.
The output of the command docker exec server lsof -i :8930 is:
exec: "lsof": executable file not found in $PATH
Cannot run exec command fb207d2fe5b902419c31cb8466bcee4ba551b097c39a7405824c320fcc67f5e2 in container 995b86032b0421c5199eb635bd65669b1aa93f96b60da4a49328050f7048197a: [8] System error: exec: "lsof": executable file not found in $PATH
Error starting exec command in container fb207d2fe5b902419c31cb8466bcee4ba551b097c39a7405824c320fcc67f5e2: Cannot run exec command fb207d2fe5b902419c31cb8466bcee4ba551b097c39a7405824c320fcc67f5e2 in container 995b86032b0421c5199eb635bd65669b1aa93f96b60da4a49328050f7048197a: [8] System error: exec: "lsof": executable file not found in $PATH
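If lsof isn't baked into the image, a hedged workaround is to fall back on netstat, or to install lsof first (the apt-get line assumes a Debian/Ubuntu-based image):
docker exec server netstat -tln
docker exec server sh -c 'apt-get update && apt-get install -y lsof && lsof -i :8930'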
NOTE: I intend to expand on this, but my kid's just been sick. I will address the debugging issue from the question when I get a chance.
You may find it easier to use docker-compose for this as it lets you run them all with one command and keep the configuration under source control. An example configuration file (from my website) looks like this:
database:
  build: database
  env_file:
    - database/.env
api:
  build: api
  command: /opt/server/dist/build/ILikeWhenItWorks/ILikeWhenItWorks
  env_file:
    - api/.env
  links:
    - database
  tty: false
  volumes:
    - /etc/ssl/certs/:/etc/ssl/certs/
    - api:/opt/server/
webserver:
  build: webserver
  ports:
    - "80:80"
    - "443:443"
  links:
    - api
  volumes_from:
    - api
I find these files very readable and comprehensible; they essentially say exactly what they're doing. You can see how this relates to the surrounding directory structure in my source code.
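For completeness, assuming the file above is saved as docker-compose.yml in the project root, the whole stack comes up with:
docker-compose up -d
docker-compose logs api    # inspect one service's output to check the links came up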
I've got some Docker containers and now I want to get into one of them with SSH. That's working: I got an SSH connection to the Docker container.
But now I have the problem that I don't know which user I can use to access this container.
I've tried it with both users I have on the host machine (web & root), but they don't work.
What to do now?
You can drop directly into a running container with:
$ docker exec -it myContainer /bin/bash
You can start a new container from an image and get a shell in it with:
$ docker run -it myImage /bin/bash
This is the preferred method of getting a shell on a container. Running an SSH server inside a container is considered bad practice and, although there are some use cases out there, should be avoided when possible.
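If the image doesn't ship bash (alpine-based images often don't), the same approach works with plain sh:
$ docker exec -it myContainer /bin/sh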
If you want to connect directly into a Docker container, without connecting to the Docker host, your Dockerfile should include the following:
# Assumes a Debian/Ubuntu base image; install the SSH server first
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN echo 'root:pass' | chpasswd
# SSH login fix. Otherwise user is kicked off after login
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
# On newer base images, root password login must also be allowed explicitly
RUN sed -i 's/#\?PermitRootLogin .*/PermitRootLogin yes/' /etc/ssh/sshd_config
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
Then use docker run with -p and -d flags. Example:
docker run -p 8022:22 -d your-docker-image
You can connect with:
ssh root@your-host -p 8022
1. Issue the command docker inspect <containerId or name>.
You will get a result like this:
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"my_bridge": {
"IPAMConfig": {
"IPv4Address": "172.17.0.20"
},
"Links": null,
"Aliases": [
"3784372432",
"xxx",
"xxx2"
],
"NetworkID": "ff7ea463ae3e6e6a099e0e044610cdcdc45b21f7e8c77a814aebfd3b2becd306",
"EndpointID": "6be4ea138f546b030bb08cf2c8af0f637e8e4ba81959c33fb5125ea0d93af967",
"Gateway": "172.17.0.1",
"IPAddress": "172.17.0.20",
"IPPrefixLen": 24,
...
2. Read out and copy the IP address from there, then connect to it via ssh existingUser@IpAddress, e.g. ssh someExistingUser@172.17.0.20. If the user doesn't exist, create them in the guest image, preferably with sudo privileges. Probably don't use the root user directly: as far as I know, that user is preset for connecting to the image via SSH keys, or has a preset password, and changing it would probably break the regular way of getting a container shell, docker exec -it containerName /bin/bash or docker-compose exec containerName /bin/bash.
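Rather than eyeballing the full JSON, the address can also be extracted directly with a format string (a one-liner sketch; it iterates over whatever networks the container is attached to):
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' containerName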
For some cases, enabling SSH in a Docker container is useful, especially when we want to test some scripts.
The link below gives a good example of how to create an image with SSH enabled and how to get its IP and connect to it:
Here
If a true SSH connection into the container is needed (i.e. to allow isolated access over the internet), this image from the linuxserver.io guys could be a great solution: https://hub.docker.com/r/linuxserver/openssh-server
A much more robust solution is pulling nsenter down onto your server, then SSHing in and running docker-enter from there. That way you don't need to run multiple processes in the container (an SSH server plus whatever the container is for), or worry about all the extra overhead of SSH users and such (not to mention the security concerns).
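A sketch of that setup, assuming jpetazzo's nsenter image (the install one-liner below is the one its README documents):
docker run --rm -v /usr/local/bin:/target jpetazzo/nsenter   # installs nsenter and docker-enter on the host
docker-enter my_container                                    # opens a shell inside the running container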
The idea behind containers is that a container runs a single process so that it can be monitored by the daemon. If this process stops or fails for some reason, it can be restarted, depending on your preference in the config. An SSH server is a running process, so if you need SSH access to your setup, make an SSH server service, which can share volumes with the other containers running alongside it in the setup.
To open a shell on a container in a host directly:
Imagine you are on your PC at home and you have a remote machine that runs docker and has running containers, and you want to open a shell on the container directly without "stopping by" on the remote host:
(The -t flag allocates a tty)
ssh -t user#remote.host 'docker exec -it running_container_name /bin/bash'
If you are already on the host, like the accepted answer:
(-i for interactive, -t for tty)
docker exec -it running_container_name /bin/bash
I want to run rabbitmq-server in one Docker container and connect to it from another container using Celery (http://celeryproject.org/).
I have RabbitMQ running using the command below:
sudo docker run -d -p :5672 markellul/rabbitmq /usr/sbin/rabbitmq-server
and I am running Celery via:
sudo docker run -i -t markellul/celery /bin/bash
When I try the very basic tutorial to validate the connection, following http://docs.celeryproject.org/en/latest/getting-started/first-steps-with-celery.html,
I am getting a connection refused error:
consumer: Cannot connect to amqp://guest@127.0.0.1:5672//: [Errno 111]
Connection refused.
When I install RabbitMQ in the same container as Celery, it works fine.
What do I need to do to get the containers interacting with each other?
[edit 2016]
Direct links are deprecated now. The new way to link containers is docker network connect. It works quite similarly to virtual networks and has a wider feature set than the old way of linking.
First you create your named containers:
docker run --name rabbitmq -d -p :5672 markellul/rabbitmq /usr/sbin/rabbitmq-server
docker run --name celery -it markellul/celery /bin/bash
Then you create a network (the last parameter is your network name):
docker network create -d bridge --subnet 172.25.0.0/16 mynetwork
Connect the containers to your newly created network:
docker network connect mynetwork rabbitmq
docker network connect mynetwork celery
Now, both containers are in the same network and can communicate with each other.
A very detailed user guide can be found at Work with networks: Connect containers.
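Once both containers are attached, they can reach each other by container name, since user-defined networks come with built-in DNS; a quick hedged check (assumes ping exists in the image):
docker exec celery ping -c 1 rabbitmq
The broker URL inside the celery container then becomes something like amqp://guest@rabbitmq:5672//.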
[old answer]
There is a new feature in Docker 0.6.5 called linking, which is meant to help with communication between Docker containers.
First, create your rabbitmq container as usual. Note that I also used the new "name" feature, which makes life a little bit easier:
docker run --name rabbitmq -d -p :5672 markellul/rabbitmq /usr/sbin/rabbitmq-server
You can use the link parameter to map a container (we use the name here; the ID would be OK too):
docker run --link rabbitmq:amq -i -t markellul/celery /bin/bash
Now you have access to the IP and port of the rabbitmq container, because Docker automatically added some environment variables:
$AMQ_PORT_5672_TCP_ADDR
$AMQ_PORT_5672_TCP_PORT
In addition Docker adds a host entry for the source container to the /etc/hosts file. In this example amq will be a defined host in the container.
From Docker documentation:
Unlike host entries in the /etc/hosts file, IP addresses stored in the environment variables are not automatically updated if the source container is restarted. We recommend using the host entries in /etc/hosts to resolve the IP address of linked containers.
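So, inside the celery container, a quick connectivity check against the alias defined above might look like this (a sketch; nc must exist in the image):
nc -zv amq 5672
and the broker URL would take the form amqp://guest@amq:5672//.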
Just get your container's IP, and connect to it from another container:
CONTAINER_IP=$(sudo docker inspect --format '{{ .NetworkSettings.IPAddress }}' $CONTAINER_ID)
echo $CONTAINER_IP
When you specify -p 5672, what Docker does is open up a new port, such as 49xxx, on the host and forward it to port 5672 of the container.
You should be able to see which port is forwarded to the container by running:
sudo docker ps -a
From there, you can connect directly to the host IP address like so:
amqp://guest@HOST_IP:49xxx
You can't use localhost, because each container is basically its own localhost.
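To see exactly which host port was assigned to the container's 5672, docker port prints just that mapping (sketch; substitute your container's name or ID):
docker port CONTAINER_ID 5672    # prints something like 0.0.0.0:49153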
Create the images:
docker build -t "imagename1" .
docker build -t "imagename2" .
Run the Docker images:
docker run -it -p 8000:8000 --name=imagename1 imagename1
docker run -it -p 8080:8080 --name=imagename2 imagename2
Create a network:
docker network create -d bridge "networkname"
Connect the running containers (named after their images above) to the network:
docker network connect "networkname" "imagename1"
docker network connect "networkname" "imagename2"
We can add any number of containers to the network, and inspect it with:
docker network inspect "networkname"
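Since both containers now sit on the same user-defined bridge, they can reach each other by name; a quick hedged check (assumes ping exists in the images):
docker exec imagename1 ping -c 1 imagename2
From imagename1, the service in imagename2 is then reachable at imagename2:8080.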
I think you can't connect to another container directly by design - that would be the responsibility of the host. An example of sharing data between containers using volumes is given here: http://docs.docker.io/en/latest/examples/couchdb_data_volumes/, but I don't think that's what you're looking for.
I recently found out about https://github.com/toscanini/maestro - that might suit your needs. Let us know if it does :), I haven't tried it myself yet.
Edit: note that you can read here that native "Container wiring and service discovery" is on the roadmap. I guess 0.7 or 0.8 at the latest.
You can get the docker instance IP with...
CID=$(sudo docker run -d -p :5672 markellul/rabbitmq /usr/sbin/rabbitmq-server); sudo docker inspect $CID | grep IPAddress
But that's not very useful.
You can use pipework to create a private network between docker containers.
This is currently on the 0.8 roadmap:
https://github.com/dotcloud/docker/issues/1143