Connecting to a running docker container - differences between using ssh and running a command with "-t -i" parameters - ssh

Could you please point out the difference between installing openssh-server and starting an ssh session into a given docker container, versus running docker run -t -i ubuntu /bin/bash and then performing some operations? How does docker attach compare to those two methods?

Difference 1. If you want to use ssh, you need to have ssh installed in the Docker image and running in your container. You might not want that, because of the extra load or from a security perspective. One way to go is to keep your images as small as possible - that avoids bugs like Heartbleed ;). Whether you want ssh is a point of discussion, but mostly personal taste. I would say only use it for debugging, and not to actually change your image. If you need the latter, you'd better make a new and better image. Personally, I have yet to install my first ssh server on a Docker image.
Difference 2. Using ssh you can start your container as specified by the CMD and maybe ENTRYPOINT in your Dockerfile. Ssh then allows you to inspect that container and run commands for whatever use case you might need. On the other hand, if you start your container with the bash command, you effectively overwrite your Dockerfile CMD. If you then want to test that CMD, you can still run it manually (probably as a background process). When debugging my images, I do that all the time. This is from a development point of view.
Difference 3. An extension of the 2nd, but from a different point of view. In production, ssh will always allow you to check out your running container. Docker has other options useful in this respect, like docker cp, docker logs and indeed docker attach.
According to the docs "The attach command will allow you to view or interact with any running container, detached (-d) or interactive (-i). You can attach to the same container at the same time - screen sharing style, or quickly view the progress of your daemonized process." However, I am having trouble actually using this in a useful manner. Maybe someone who uses it could elaborate on that?
Those are the only essential differences. There is no difference for image layers, committing or anything like that.
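To make the attach use case a bit more concrete, here is a minimal sketch (the busybox image, the container name and the ping command are just placeholders for whatever daemonized process you run):
docker run -d --name pinger busybox ping 127.0.0.1   # some daemonized container
docker attach --sig-proxy=false pinger               # watch its output "screen sharing style"; with --sig-proxy=false, Ctrl-C exits the attach client without killing the process
After detaching, the container keeps running, which is essentially the "quickly view the progress" scenario from the docs.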

Related

Running docker commands as a user without root privileges (possibly with the www-data user of Apache)

I am developing a simple Flask application (configured with an Apache web server) which provides a web interface for docker management. My Apache server runs as the ‘www-data’ user and uses the same user for all of its API operations.
But I get a ‘Permission denied’ error for the following,
docker images
docker run, etc…
as it doesn't allow the ‘www-data’ user to run the above commands.
Can you please provide me a suggestion on using the ‘www-data’ user for docker operations.
I don't want to add the ‘www-data’ user to the sudoers list.
Is adding the user to the docker group alone a proper solution?
Or please suggest a best-practice solution for this.
Thanks
GuruPrasad
It would be easier, clearer, and no less dangerous to tell Apache to run your process as root.
Remember that, if you can run any Docker command at all, you can trivially get unrestricted root-level access to anything on the system. For example, if your tool decides it really does want www-data to be in the host's sudoers list, it can
docker run --rm -v /:/host busybox \
sh -c 'echo "www-data ALL=(ALL) NOPASSWD: ALL" >> /host/etc/sudoers'
Depending on what your management tool does, it is potentially offering the same unprotected root-level access to the host to anyone who can reach the web page. Even if it isn't, you need to be extremely careful with how you invoke Docker (another SO answer I was looking at had the potential to root the system if a user could create a directory with an arbitrary name and run the script from there, for instance).
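If, with those caveats in mind, you still want the web process to drive Docker directly, membership in the docker group is the usual mechanism, as hinted at in the question. A sketch (the apache2 service name is an assumption for a Debian-style setup):
sudo usermod -aG docker www-data     # grant www-data access to the Docker socket
sudo systemctl restart apache2       # restart so the workers pick up the new group membership (service name assumed)
Just be aware that functionally this is equivalent to giving www-data passwordless root, for the reasons shown above.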

gcloud compute ssh connects shows wrong instance name

I'm pretty new to the Gcloud environment, but getting the hang of it.
Though with our first project live on an instance, I've been shuffling some static IPs, instances and snapshots around for an optimal deployment workflow. But what's going on now I can't understand:
I have two instances, say live-1 and dev-2.
Now I can connect to live-1 using gcloud compute ssh live-1 and it's okay.
When I try to connect to dev-2 using gcloud compute ssh dev-2, it logs me in to live-1.
The first time I tried to ssh to dev-2 it took longer than usual. After that it just connects me to the wrong instance immediately.
The goal was (as you might've guessed) to copy the live environment to a testing one. I did create an image of live-1 and cloned it to set up dev-2. In my earlier experience trying this, it was possible and worked as expected.
Whenever I use the Compute Console in the browser and use the online SSH tool from the instance list, it connects to dev-2 properly. But on my local machine, using the aforementioned command, I get connected to live-1.
I already removed the IP for dev-2 from my known hosts, figuring it's cached somewhere, but no luck. What am I missing here?
Edit: I found out just now that the instances are separate though 'named' the same; if I log in to dev-2, I do see myuser@live-1: in the shell, but it appears to be running a separate instance. I created a dummy file on the supposed dev-2, and it doesn't show up on the actual live-1 machine.
So this is very confusing; I rely on the 'user@host' tag in front of every shell line to know where and what I'm actually working on; having two instances with the same name but different environments is confusing.
Ok, it was dead simple. Just run sudo hostname [desiredhostname] in the terminal, and restart it.
So in my case I logged in to dev-2 and ran sudo hostname dev-2.

Mappings between Docker Remote API and its command line client

Docker documentation is pretty good at describing what you can do from the command line.
It also gives a pretty comprehensive description of the commands associated with the remote API.
It does not, however, appear to give sufficient context for using the remote API to do things that one would do using the command line.
An example of what I am talking about: suppose you want to do a command like:
docker run --rm=true -i -t -v /home/user/resources:/files -p 8080:8080 --name SomeService myImage_v3
using the Remote API. There is a container "run" command in the Remote API:
POST /containers/(id or name)/start
And this command refers back to the create container command for the rather long list of JSON strings that you would need to add in order to do the actual start.
The problem here is: first, just calling this command doesn't work. Apparently there is more that you have to do (I am guessing you have to do a create, then a start). Second, it is unclear which JSON strings you need to use in order to do what I showed in the command line (like setting ports, mapping to the external directory, etc). Not only do the JSON strings provided in the remote API documentation not line up with the command line parameters (at least, not in any way that is obvious!), but it is unclear which JSON strings are required for the create (assuming that we have to do a create, which isn't established yet!) and which are required for the start.
This is just related to starting a container. Suppose you want to stop and destroy a container, as in:
docker stop SomeService
docker rm SomeService
Granted, there appear to be one-to-one commands for doing this in the remote API:
POST /containers/(id or name)/stop
POST /containers/(id or name)/kill
But it seems that the IDs you can pass them do not correspond to the IDs shown when you list containers or images.
Is there somewhere I can go to gather information on how to set up and use remote API commands that relates these commands and their JSON parameters to the commands and parameters in the command line?
Failing that, can someone please tell me how to do the start that I showed in my illustration using the remote API?
In any event: is there someone working on docker development I can bring these documentation issues to? It is, I believe, a big "hole" in their documentation.
Someone please advise...
docker run is a combination of docker create, followed by docker start, so https://docs.docker.com/engine/reference/api/docker_remote_api_v1.22/#create-a-container, followed by https://docs.docker.com/engine/reference/api/docker_remote_api_v1.22/#start-a-container
If you're running "interactively", you may need to attach to the container after that; https://docs.docker.com/engine/reference/api/docker_remote_api_v1.22/#attach-to-a-container
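As a rough sketch of how the docker run example above maps onto API v1.22, assuming curl 7.40+ talking to the daemon's Unix socket (the field names should be double-checked against the create-container reference, and --rm has no create-time equivalent at this API version - the client simply removes the container after it exits):
# 1. create the container (covers -i -t, -v, -p and --name)
curl --unix-socket /var/run/docker.sock -X POST \
  -H "Content-Type: application/json" \
  -d '{"Image": "myImage_v3", "Tty": true, "OpenStdin": true,
       "ExposedPorts": {"8080/tcp": {}},
       "HostConfig": {"Binds": ["/home/user/resources:/files"],
                      "PortBindings": {"8080/tcp": [{"HostPort": "8080"}]}}}' \
  "http://localhost/v1.22/containers/create?name=SomeService"
# 2. start it (and then attach, if you want the interactive part)
curl --unix-socket /var/run/docker.sock -X POST \
  "http://localhost/v1.22/containers/SomeService/start"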

Docker image push over SSH (distributed)

TL;DR Basically, I am looking for this:
docker push myimage ssh://myvps01.vpsprovider.net/
I am failing to grasp the rationale behind the whole Docker Hub / Registry thing. I know I can run a private registry, but for that I have to set up the infrastructure of actually running a server.
I took a sneak peek inside the inner workings of Docker (well, the filesystem at least), and it looks like Docker image layers are just a bunch of tarballs, more or less, with some elaborate file naming. I naïvely think it would not be impossible to whip up a simple Python script to do distributed push/pull, but of course I did not try, so that is why I am asking this question.
Are there any technical reasons why Docker could not just do distributed (server-less) push/pull, like Git or Mercurial?
I think this would be a tremendous help, since I could just push the images that I built on my laptop right onto the app servers, instead of first pushing to a repo server somewhere and then pulling from the app servers. Or maybe I have just misunderstood the concept and the Registry is a really essential feature that I absolutely need?
EDIT Some context that hopefully explains why I want this, consider the following scenario:
Development, testing done on my laptop (OSX, running Docker machine, using docker-compose for defining services and dependencies)
Deploy to a live environment by means of a script (self-written, bash, few dependencies on dev machine, basically just Docker machine)
Deploy to a new VPS with very few dependencies except SSH access and Docker daemon.
No "permanent" services running anywhere, i.e. I specifically don't want to host a permanently running registry (especially not accessible to all the VPS instances, though that could probably be solved with some clever SSH tunneling)
The current best solution is to use Docker Machine to point at the VPS server and rebuild the image there, but it slows down deployment as I have to build the container from source each time.
If you want to push docker images to a given host, there is already everything in Docker to allow this. The following example shows how to push a docker image through ssh:
docker save <my_image> | ssh -C user@my.remote.host.com docker load
docker save will produce a tar archive of one of your docker images (including its layers)
-C is for ssh to compress the data stream
docker load creates a docker image from a tar archive
Note that the combination of a docker registry + docker pull command has the advantage of only downloading missing layers. So if you frequently update a docker image (adding new layers, or modifying a few last layers) then the docker pull command would generate less network traffic than pushing complete docker images through ssh.
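If the plain save/load pipe still generates too much traffic, the same idea works with explicit compression on both ends (a sketch, assuming gzip is available on both machines):
docker save <my_image> | gzip | ssh user@my.remote.host.com 'gunzip | docker load'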
I made a command line utility just for this scenario.
It sets up a temporary private docker registry on the server, establishes an SSH Tunnel from your localhost, pushes your image, then cleans up after itself.
The benefit of this approach over docker save is that only the new layers are pushed to the server, resulting in a quicker upload.
Oftentimes using an intermediate registry like Docker Hub is undesirable and cumbersome.
https://github.com/brthor/docker-push-ssh
Install:
pip install docker-push-ssh
Example:
docker-push-ssh -i ~/my_ssh_key username@myserver.com my-docker-image
The biggest caveat is that you have to manually add your local IP to Docker's insecure-registries config.
https://stackoverflow.com/questions/32808215/where-to-set-the-insecure-registry-flag-on-mac-os
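For reference, on current Docker versions that setting lives in the daemon configuration file (typically /etc/docker/daemon.json on Linux; on macOS it is edited through the Docker preferences UI). A sketch, with a placeholder address:
{
  "insecure-registries": ["192.168.1.50:5000"]
}
The daemon needs a restart after changing it.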
Saving/loading an image onto a Docker host and pushing to a registry (private or Hub) are two different things.
The former @Thomasleveil has already addressed.
The latter actually does have the "smarts" to only push required layers.
You can easily test this yourself with a private registry and a couple of derived images.
If we have two images and one is derived from the other, then doing:
docker tag baseimage myregistry:5000/baseimage
docker push myregistry:5000/baseimage
will push all layers that aren't already found in the registry. However, when you then push the derived image next:
docker tag derivedimage myregistry:5000/derivedimage
docker push myregistry:5000/derivedimage
you may notice that only a single layer gets pushed - provided your Dockerfile was built such that it only adds one layer (e.g. by chaining RUN commands, as per Dockerfile Best Practices).
On your Docker host, you can also run a Dockerised private registry.
See Containerized Docker registry
To the best of my knowledge and as of the time of writing this, the registry push/pull/query mechanism does not support SSH, but only HTTP/HTTPS. That's unlike Git and friends.
See Insecure Registry on how to run a private registry through HTTP, especially be aware that you need to change the Docker engine options and restart it:
Open the /etc/default/docker or /etc/sysconfig/docker file for editing, depending on your operating system; this is where your Engine daemon start options live.
Edit (or add) the DOCKER_OPTS line and add the --insecure-registry flag. This flag takes the URL of your registry, for example:
DOCKER_OPTS="--insecure-registry myregistrydomain.com:5000"
Close and save the configuration file.
Restart your Docker daemon.
You will also find instruction to use self-signed certificates, allowing you to use HTTPS.
Using self-signed certificates
[...]
This is more secure than the insecure registry solution. You must configure every docker daemon that wants to access your registry.
Generate your own certificate:
mkdir -p certs && openssl req \
  -newkey rsa:4096 -nodes -sha256 -keyout certs/domain.key \
  -x509 -days 365 -out certs/domain.crt
Be sure to use the name myregistrydomain.com as a CN.
Use the result to start your registry with TLS enabled
Instruct every docker daemon to trust that certificate.
This is done by copying the domain.crt file to /etc/docker/certs.d/myregistrydomain.com:5000/ca.crt.
Don’t forget to restart the Engine daemon.
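For the "start your registry with TLS enabled" step, the containerized registry mentioned above can be pointed at those certificates roughly like this (a sketch using the registry:2 image; adjust the certs path to wherever you generated it):
docker run -d -p 5000:5000 --restart=always --name registry \
  -v "$(pwd)"/certs:/certs \
  -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
  -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
  registry:2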
Expanding on the idea of @brthornbury.
I did not want to dabble with running Python, so I came up with a bash script for the same.
#!/usr/bin/env bash
SOCKET_NAME=my-tunnel-socket
REMOTE_USER=user
REMOTE_HOST=my.remote.host.com
# open ssh tunnel to remote-host, with a socket name so that we can close it later
ssh -M -S $SOCKET_NAME -fnNT -L 5000:$REMOTE_HOST:5000 $REMOTE_USER@$REMOTE_HOST
if [ $? -eq 0 ]; then
echo "SSH tunnel established, we can push image"
# push the image to remote host via tunnel
docker push localhost:5000/image:latest
fi
# close the ssh tunnel using the socket name
ssh -S $SOCKET_NAME -O exit $REMOTE_USER@$REMOTE_HOST

Dockerfile privileged flag for Docker container (needed because of an Apache ulimit error) - AWS

I would like to start a container with privileges. Manually I can do that directly by typing:
sudo docker run --privileged name/image
But how can I generate a container with privileges from a Dockerfile? Is there any command to do that in the Dockerfile?
In my case I am doing a deployment on Amazon; in case it cannot be done from a Dockerfile, can it be done from the Dockerrun.aws.json?
PS. To give some context to the question, I need privileges in the docker container to be able to change the ulimit because of Apache.
Edit:
I don't change it locally inside the container because in Docker the container's ulimit is inherited from the host. That is why the change doesn't affect the container if I make it locally.
Running the container with elevated privileges probably raises all sorts of security and reliability issues.
I would suggest that, rather than starting the whole Docker session with elevated privileges (which will potentially mean that everything run in it has elevated privileges), you instead create a docker container with a changed ulimit setting.
I am not an expert, but the instructions for creating your own container look clear enough: run sudo vi /etc/security/limits.conf within your new container, change the soft nofile and soft nproc values, save, and then export the new container. That seems the way to go. You can then run the new container with normal privilege levels.
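A rough sketch of that sequence, using docker commit to capture the change (the image names are placeholders, and per the question's edit the effective limits are still bounded by the host/daemon configuration):
docker run -it --name limits-base name/image bash
# inside the container: edit /etc/security/limits.conf (as root), raise soft nofile / soft nproc, then exit
docker commit limits-base name/image-with-limits    # capture the edited filesystem as a new image
docker run name/image-with-limits                   # run it with normal privilege levels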
The other option that seems to be used in many places is to run multiple container instances so as to avoid congestion issues.