I'm running CoreOS to host some Docker containers, and I want to block SSH access from the containers to the CoreOS host, which I believe is a reasonable security measure.
I've tried to restrict this access using the /etc/hosts.deny file or iptables. The problem with these approaches is that they both need the containers' IP ranges to be defined explicitly, and I can't find a guaranteed way to determine them automatically on all new hosts.
As the Docker documentation describes, the default network definition for the bridge interface is:
$ sudo docker network inspect bridge
[
    {
        "name": "bridge",
        "id": "7fca4eb8c647e57e9d46c32714271e0c3f8bf8d17d346629e2820547b2d90039",
        "driver": "bridge",
        "containers": {
            "bda12f8922785d1f160be70736f26c1e331ab8aaf8ed8d56728508f2e2fd4727": {
                "endpoint": "e0ac95934f803d7e36384a2029b8d1eeb56cb88727aa2e8b7edfeebaa6dfd758",
                "mac_address": "02:42:ac:11:00:03",
                "ipv4_address": "172.17.0.3/16",
                "ipv6_address": ""
            },
            "f2870c98fd504370fb86e59f32cd0753b1ac9b69b7d80566ffc7192a82b3ed27": {
                "endpoint": "31de280881d2a774345bbfb1594159ade4ae4024ebfb1320cb74a30225f6a8ae",
                "mac_address": "02:42:ac:11:00:02",
                "ipv4_address": "172.17.0.2/16",
                "ipv6_address": ""
            }
        }
    }
]
The Docker documentation also says that it uses 172.17.42.1/16 for the docker0 interface when that range is available (and it is configurable via the -b option). But I see some containers with IPs like 10.1.41.2, which makes it difficult to block them with a hard-coded IP range.
I know about the --icc daemon option, which affects inter-container communication, and I'm looking for a similarly clean way to restrict SSH access from containers to their host.
Thank you.
You could use iptables to set a firewall rule preventing this.
iptables -A INPUT -i docker0 -p tcp --destination-port 22 -j DROP
(To be run on the host)
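If your containers also sit on user-defined bridge networks, a minimal sketch that matches by interface instead of by IP range, so it keeps working whatever subnet the containers get (the interface names are Docker's defaults; verify yours with ip link):
iptables -A INPUT -i docker0 -p tcp --dport 22 -j DROP
# user-defined bridge networks get interfaces named br-<network-id>; the "+" wildcard matches them all
iptables -A INPUT -i br-+ -p tcp --dport 22 -j DROP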
Related
I am building a RESTful API client using the .NET Core framework. The APIs are the OpenStack APIs; however, because of the network configuration, I cannot access them from my local (development) computer. When accessing OpenStack normally, I have to SSH into a machine that can in turn SSH into the OpenStack infrastructure.
Bearing this in mind, is it possible to use an SSH tunnel for the API endpoints and then call them in the implemented Web API client? I have tried to do this, but the call to the endpoint returns error 401 - content length is required.
Basically, it is possible to call the OpenStack API endpoints through an SSH tunnel without any publicly accessible API endpoints. Because I have no experience with the .NET Core framework, this answer is fairly generic and contains no C# code. I hope it helps you anyway.
IMPORTANT: You can use the following steps only when you have an admin login to the OpenStack deployment, and you should ONLY use this approach when the OpenStack deployment is a test deployment, where breaking it doesn't affect other users.
1. SSH-tunnel
You can forward ports with the command:
ssh -L 127.0.0.1:<PORT>:<IP_REMOTE>:<PORT> <USER_JUMPHOST>@<IP_JUMPHOST> -fN
<PORT> = port of the OpenStack component you want to access remotely (for example 5000 for Keystone)
<IP_REMOTE> = IP of the host where your OpenStack deployment is running
<USER_JUMPHOST>@<IP_JUMPHOST> = SSH access to the jump host that sits between you and your OpenStack deployment
This has to be done for each OpenStack component. If you don't want this command to run in the background, remove the -fN at the end.
Here you first have to forward Keystone on port 5000.
example: ssh -L 127.0.0.1:5000:192.168.62.1:5000 deployer@192.168.67.1 -fN
You can test the access from your local PC via curl or a web browser:
curl http://127.0.0.1:5000
{"versions": {"values": [{"id": "v3.13", "status": "stable", "updated": "2019-07-19T00:00:00Z", "links": [{"rel": "self", "href": "http://127.0.0.1:5000/v3/"}], "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}]}]}}
2. Change the OpenStack endpoints
To also be able to log in to the OpenStack deployment through the tunnel, you have to change the endpoints so that they point to localhost on the remote system where your OpenStack deployment is running:
Log in normally to your OpenStack deployment as the admin user.
List all endpoints: openstack endpoint list
Change the public and internal endpoints of Keystone to localhost:
openstack endpoint set --url http://127.0.0.1:5000 <ID_OF_INTERNAL_KEYSTONE_ENDPOINT>
Changing the internal endpoint will break the OpenStack login on the remote system for now, but don't worry.
Now you can log in to OpenStack via the openstack client from your local PC. Here you have to authenticate against localhost. If you use an rc file to log in, you have to change the auth URL to export OS_AUTH_URL=http://127.0.0.1:5000/v3
To run commands like openstack server list through your SSH tunnel, change the nova endpoints as well by running openstack endpoint set --url "http://127.0.0.1:8774/v2.1" <ID> on your local PC for the internal and public endpoints of nova (of course you also need an SSH tunnel for port 8774).
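Put together, a minimal sketch for nova, reusing the hosts from the Keystone example above (the endpoint IDs are placeholders you get from openstack endpoint list):
ssh -L 127.0.0.1:8774:192.168.62.1:8774 deployer@192.168.67.1 -fN
openstack endpoint set --url "http://127.0.0.1:8774/v2.1" <ID_OF_INTERNAL_NOVA_ENDPOINT>
openstack endpoint set --url "http://127.0.0.1:8774/v2.1" <ID_OF_PUBLIC_NOVA_ENDPOINT>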
3. Authenticate against the OpenStack deployment
When you send HTTP requests without the openstack client, you have to manually request an authentication token from the deployment:
Log in normally to your OpenStack deployment
Make a token request:
curl -v -s -X POST "$OS_AUTH_URL/auth/tokens?nocatalog" -H "Content-Type: application/json" -d '{ "auth": { "identity": { "methods": ["password"],"password": {"user": {"domain": {"name": "'"$OS_USER_DOMAIN_NAME"'"},"name": "'"$OS_USERNAME"'", "password": "'"$OS_PASSWORD"'"} } }, "scope": { "project": { "domain": { "name": "'"$OS_PROJECT_DOMAIN_NAME"'" }, "name": "'"$OS_PROJECT_NAME"'" } } }}' --stderr - | grep X-Subject-Token
This command can be used without changes. The value after the key X-Subject-Token is the token from Keystone. Copy this value and export it as the environment variable OS_TOKEN, for example like this:
export OS_TOKEN=gAAAAABZuj0GZ6g05tKJ0hvihAKXNJgzfoT4TSCgR7cgWaKvIvbD66StJK6cS3FqzR2DosmqofnR_N-HztJXUcVhwF04HQsY9CBqQC7pblGnNIDWCXxnJiCH_jc4W-uMPNA6FBK9TT27vE5q5AIa487GcLLkeJxdchXiDJvw6wHty680eJx3kL4
Make requests with the token, for example GET requests with curl:
curl -s -X GET -H "X-Auth-Token: $OS_TOKEN" http://127.0.0.1:5000/v3/users | python -m json.tool
First off: I have read the answers to similar questions on SO, but none of them worked.
IMPORTANT NOTE: The answer below is still valid, but maybe jump to the end for an alternative.
The situation:
An app with a GUI is running in a Docker container (CentOS 7.1) under Arch Linux (machine A).
Machine A has a monitor connected to it.
I want to access this GUI via X11 forwarding on my Arch Linux client machine (machine B).
What works:
GUI works locally on machine A (with /tmp/.X11-unix being mounted in the Docker container).
X11 forwarding of any app running outside of docker (X11 forwarding is set up and running properly for non-docker usage).
I can even switch the user while remotely logged in, copy the .Xauthority file to the other user and X11 forwarding works as well.
Some setup info:
Docker networking is 'bridged'.
Container can reach host (firewall is open).
DISPLAY variable is set in container (to host-ip-addr:10.0 because of TCP port 6010 where sshd is listening).
Packets to X forward port (6010) are reaching the host from the container (tcpdump checked).
What does not work:
X11 forwarding of the Docker app
Errors:
X11 connection rejected because of wrong authentication.
xterm: Xt error: Can't open display: host-ip-addr:10.0
Things I tried:
starting client ssh with ssh -Y option on machine B
putting "X11ForwardTrusted yes" in ssh_config on machine B
xhost + (to allow any client to connect) on machine B
putting Host * in ssh_config on machine B
putting X11UseLocalhost no in sshd_config on machine A (to allow non-localhost clients)
Adding the X auth token in the container with xauth add from the login user on machine A
Just copying over the .Xauthority file from a working user into the container
Making sure the .Xauthority file has correct permissions and owner
How can I just disable all the X security stuff and get this working?
Or even better: how can I get it working with security?
Is there at least a way to enable extensive debugging to see where exactly the problem is?
Alternative: The first answer below shows how to effectively resolve this issue. However, I would recommend you look into a different approach altogether, namely VNC. I personally switched to a TigerVNC setup that replaces the X11 forwarding and have not looked back. The performance is leagues above what X11 forwarding delivered for me. There might be some cases where you cannot use VNC for whatever reason, but I would try it first.
The general setup is now as follows:
-VNC server runs on machine A on the host (not inside a docker container).
-Now you just have to figure out how to get a GUI for inside a docker container (which is a much more trivial undertaking).
-If the docker container was NOT started from the VNC environment, the DISPLAY variable may need adjusting.
Thanks so much @Lazarus535
I found that for me adding the following to my docker command worked:
--volume="$HOME/.Xauthority:/root/.Xauthority:rw"
I found this trick here
EDIT:
As Lazarus pointed out correctly you also have to set the --net=host option to make this work.
Ok, here is the thing:
1) Log in to remote machine
2) Check which display was set with echo $DISPLAY
3) Run xauth list
4) Copy the line corresponding to your DISPLAY
5) Enter your docker container
6) xauth add <the line you copied>*
7) Set DISPLAY with export DISPLAY=<ip-to-host>:<no-of-display>
*So far so good, right?
This was nothing new... however, here is the twist:
The line printed by xauth list for the login user looks something like this (in my case):
<hostname-of-machine>/unix:<no-of-display> MIT-MAGIC-COOKIE-1 <some number here>
Because I use the bridged Docker setup, the X forwarding port is not listening locally in the container, since sshd is not running there. Change the line above to:
<ip-of-host>:<no-of-display> MIT-MAGIC-COOKIE-1 <some number here>
In essence: Remove the /unix part.
<ip-of-host> is the IP address where the sshd is running.
Set the DISPLAY variable as above.
So the error was that the DISPLAY name in the environment variable was not the "same" as the entry in the xauth list / .Xauthority file, and the client could therefore not authenticate properly.
I switched back to an untrusted X11 forwarding setting.
The X11UseLocalhost no setting in the sshd_config file, however, is important, because the incoming connection will come from a "different" machine (the docker container).
This works in any scenario.
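Put together, a minimal sketch of the whole sequence (the container name, host IP and display number are examples, not values from above):
# on the host (machine A), inside the ssh -X session
echo $DISPLAY                      # e.g. localhost:10.0
xauth list                         # e.g. <hostname-of-machine>/unix:10  MIT-MAGIC-COOKIE-1  <some number here>
# inside the container
docker exec -it mycontainer /bin/bash
xauth add 192.168.1.10:10 MIT-MAGIC-COOKIE-1 <some number here>   # same cookie, /unix part removed
export DISPLAY=192.168.1.10:10.0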
Install xhost if you don't have it. Then, in bash,
export DISPLAY=:0.0
xhost +local:docker
After this, run your docker run command (or whatever docker command you are running) with -e DISPLAY=$DISPLAY, for example as in the sketch below.
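A minimal sketch (the image name is a placeholder; mounting the X socket follows the other answers here):
export DISPLAY=:0.0
xhost +local:docker
docker run --rm -it \
  -e DISPLAY=$DISPLAY \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  your-gui-image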
Usually it works via https://stackoverflow.com/a/61060528/429476
But if you are running docker as a different user than the one used for ssh -X into the server, then copying the .Xauthority file only helped together with volume-mapping it.
Example: I SSHed into the server as the alex user, then ran docker after su - root and got this error
X11 connection rejected because of wrong authentication.
After copying the .Xauthority file and mapping it like https://stackoverflow.com/a/51209546/429476, it worked:
cp /home/alex/.Xauthority .
docker run -it --network=host --env DISPLAY=$DISPLAY --privileged \
--volume="$HOME/.Xauthority:/root/.Xauthority:rw" \
-v /tmp/.X11-unix:/tmp/.X11-unix --rm <dockerimage>
More details on wiring here https://unix.stackexchange.com/a/604284/121634
Some clarifying remarks: the host is A, the local machine is B.
I've edited this post to note things that I think should work in theory but haven't been tested, vs. things I know to work.
Running docker non-interactively
If your docker container is running non-interactively and running sshd, you can use jump hosts or ProxyCommand and specify the X11 client to run. You should NOT volume-share your Xauthority file with the container, and sharing -e DISPLAY likely has no effect on future ssh sessions.
Since you essentially have two sshd servers, either of the following should work out of the box
If you have an openssh client newer than version 7.3, you can use the following command:
ssh -X -J user-on-host@hostmachine,user-on-docker@dockercontainer xeyes
If your openssh client is older, the syntax is instead
(google says the -X is not needed in the proxy command, but I am suspicious)
ssh -X -o ProxyCommand="ssh -W %h:%p user-on-host@hostmachine" user-on-docker@dockermachine xeyes
Or ssh -X into host, then ssh -X into docker.
In either of the above cases, you should NOT share .Xauthority with the container
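Spelled out, the two-hop variant (same placeholder names as above):
ssh -X user-on-host@hostmachine
# then, inside that session:
ssh -X user-on-docker@dockercontainer
xeyes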
Running docker interactively from within the ssh session
The easiest way to get this done is to set --net=host and X11UseLocalhost yes.
If your docker is running sshd, you can open a second ssh -X session on your local machine and use the jumphost method as above.
If you start it in the ssh session, you can either pass -e DISPLAY=$DISPLAY or export it once you're in. You might have to export it if you attach to an existing container where this line wasn't used.
Use these docker args for --net host and X11UseLocalhost yes (an assembled command follows the list):
ssh -X to host
-e DISPLAY=$DISPLAY
-v $HOME/.Xauthority:/home/same-as-dash-u-user/.Xauthority
-u user
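A minimal sketch assembling those arguments (image and user names are placeholders):
ssh -X user@host          # on the local machine first
docker run --rm -it --net=host \
  -e DISPLAY=$DISPLAY \
  -v $HOME/.Xauthority:/home/someuser/.Xauthority \
  -u someuser \
  your-image xeyes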
What follows is an explanation of how everything works and other approaches to try
About Xauthority
ssh -X/-Y sets up a session key in the host's Xauthority file, then sets up a listen port on which it places an X11 proxy that uses the session key and converts it to be compatible with the key on your local machine. By design, the .Xauthority keys will be different between your local machine and the host machine. If you use jump hosts / ProxyCommand, the keys between the host and the container will again be different from each other. If you instead use ssh tunnels or a direct X11 connection, you will have to share the host's Xauthority with the container. In that case you can only have one active session per user, since new sessions invalidate the previous ones by modifying the host's .Xauthority so that it only works with that session's ssh X11 proxy.
X11UseLocalhost no theory
Even though X11UseLocalhost no causes the X server to listen on the wildcard address, with --net host I could not redirect the container display to localhost:X.Y, where X and Y are from the host's $DISPLAY.
X11UseLocalhost yes is the easy way
If you choose X11UseLocalhost yes, the DISPLAY variable on the host becomes localhost:X.Y, which causes the ssh X11 proxy to listen only on localhost (port 6000+X).
If X11UseLocalhost is no, the DISPLAY variable on the host becomes the host's hostname:X.Y, which causes the X server to listen on 0.0.0.0:6000+X and causes X clients to reach out over the network to the hostname specified.
This is theoretical; I don't yet have access to docker on a remote host to test it.
But this is the easy way. We bypass that by redirecting the DISPLAY variable to always be localhost, and do docker port mapping to move the data from localhost:X+1.Y on the container to localhost:X.Y on the host, where ssh is waiting to forward X traffic back to the local machine. The +1 makes us agnostic to running either --net=host or --net=bridge.
Setting up container ports requires specifying EXPOSE in the Dockerfile and publishing the ports with the -p flag.
Setting everything up manually without ssh -X
This works only with --net host. This approach works without xauth because we are directly piping to your unix domain socket on the local machine
ssh to host without -X
ssh -R6010:localhost:6010 user@host
start docker with -e DISPLAY=localhost:10.1 or export inside
in another terminal on local machine
socat -d -d TCP-LISTEN:6010,fork UNIX-CONNECT:/tmp/.X11-unix/X0
In the original terminal, run your X clients
If the container is on a bridged network and you can't use docker ports, enable sshd in the container and use the jump-host method
I am trying to connect to the container named redis, which is running right now, but I get the error Could not connect to Redis at redis:6379: Name or service not known. Can anyone please help me figure out the issue and fix it?
This is because the two containers are not on the same network. Add a networks property under each service name and make sure it's the same for both (and define it in the top-level networks section):
redis:
  networks:
    - redis-net
Naming the container doesn't alter your hosts file or DNS, and depending on how you ran the container it may not be accessible via the standard port as Docker does port translation.
Run docker inspect redis and examine the ports output, it will tell you what port it is accessible on as well as the IP. Note, however, that this will only be connectable over that IP from that host. To access it from off of the host you will need to use the port from the above command and the host's IP address. That assumes your local firewall rules allow it, which are beyond the scope of this site.
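If you prefer not to read through the full JSON, a sketch using docker inspect format templates (these template paths exist in current Docker versions, though the output layout can vary):
docker inspect -f '{{json .NetworkSettings.Ports}}' redis
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' redis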
Try the command below:
src/redis-cli -h localhost -p 6379
I'm "dockerizing" an app which does UDP broadcast heartbeating on a known port. This is with docker-engine-1.7.0 on a variety of hosts (Fedora, Centos7, SLES 12).
I notice that the 'docker0' bridge on the docker host and 'eth0' inside the container each have a broadcast address of 0.0.0.0.
Assuming admin privilege on the host I can manually set the broadcast address on docker0. Likewise in the container (if the container is running privileged or with NET_ADMIN, NET_BROADCAST), but I'm curious why the broadcast address isn't set by default. Is there a configuration option I'm missing for Docker to do this automatically?
Host:
# ifconfig docker0 broadcast 172.17.255.255 up
# tcpdump -i docker0 udp port 5000
Container:
# ifconfig eth0 broadcast 172.17.255.255 up
# echo "Hello world" | socat - UDP-DATAGRAM:172.17.255.255:5000,broadcast
Broadcast from the host to the container also works once the broadcast addresses are set.
If you are passing NET_ADMIN to the Docker container, I would not use the docker0 network at all for your application.
If I understood correctly what you are trying to do, the UDP broadcast heartbeating on a known port is used by Docker containers that belong to different hosts to find each other, not by different docker containers on the same host.
I would then recommend to use --net=host:
docker run --net=host --cap-add NET_ADMIN ....
This way, if you get a shell into the docker container, you will see that the network environment is exactly the same as that of the host running the containers. If your application was running on that server earlier using UDP broadcast, it will work in exactly the same way in the docker container.
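A quick way to check, mirroring the socat test from the question (the broadcast address and port are examples for your LAN, not values from this answer):
# inside the --net=host container: send a heartbeat to the LAN broadcast address
echo "Hello world" | socat - UDP-DATAGRAM:192.168.1.255:5000,broadcast
# on another machine on the same LAN: listen for it
socat -u UDP-RECV:5000 -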
I've got some docker containers and now I want to access one of them with SSH. That part works: I get a connection to the docker container via SSH.
But now I have the problem that I don't know which user I can use to log in to this container.
I've tried both users I have on the host machine (web and root), but they don't work.
What should I do now?
You can drop directly into a running container with:
$ docker exec -it myContainer /bin/bash
You can get a shell on a container that is not running with:
$ docker run -it myContainer /bin/bash
This is the preferred method of getting a shell on a container. Running an SSH server inside a container is not considered good practice and, although there are some use cases out there, should be avoided when possible.
If you want to connect directly into a Docker Container, without connecting to the docker host, your Dockerfile should include the following:
# assumes a Debian/Ubuntu base image: install the SSH server and set a root password
RUN apt-get update && apt-get install -y openssh-server
RUN echo 'root:pass' | chpasswd
RUN mkdir /var/run/sshd
# SSH login fix. Otherwise the user is kicked off right after login
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
Then use docker run with -p and -d flags. Example:
docker run -p 8022:22 -d your-docker-image
You can connect with:
ssh root@your-host -p 8022
1. Issue the command docker inspect (containerId or name)
You will get a result like this
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"my_bridge": {
"IPAMConfig": {
"IPv4Address": "172.17.0.20"
},
"Links": null,
"Aliases": [
"3784372432",
"xxx",
"xxx2"
],
"NetworkID": "ff7ea463ae3e6e6a099e0e044610cdcdc45b21f7e8c77a814aebfd3b2becd306",
"EndpointID": "6be4ea138f546b030bb08cf2c8af0f637e8e4ba81959c33fb5125ea0d93af967",
"Gateway": "172.17.0.1",
"IPAddress": "172.17.0.20",
"IPPrefixLen": 24,
...
Read out and copy the IP address from there, then connect to it via ssh existingUser@IpAddress, e.g.
someExistingUser@172.17.0.20. If the user doesn't exist, create them in the guest image, preferably with sudo privileges. Probably don't use the root user directly, since as far as I know that user is preset for connecting to the image via SSH keys, or has a preset password, and changing it would probably break the regular way of getting a shell in the container: docker exec -it containerName /bin/bash or docker-compose exec containerName /bin/bash
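A minimal sketch for creating such a user inside the image (the username and password are placeholders):
useradd -m -s /bin/bash deploy
echo 'deploy:changeme' | chpasswd
usermod -aG sudo deploy   # the admin group is 'wheel' instead of 'sudo' on CentOS/Fedora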
In some cases, enabling SSH in a docker container is useful, especially when we want to test some scripts.
The link below gives a good example of how to create an image with SSH enabled and how to get its IP and connect to it.
Here
If a true SSH connection into the container is needed (i.e. to allow isolated access over the internet), this image from the linuxserver.io guys could be a great solution: https://hub.docker.com/r/linuxserver/openssh-server
A much more robust solution is pulling nsenter down to your server, then SSHing in and running docker-enter from there. That way you don't need to run multiple processes in the container (the SSH server plus whatever the container is for), or worry about all the extra overhead of SSH users and such (not to mention the security concerns).
The idea behind containers is that a container runs a single process so that it can be monitored by the daemon. If this process stops or fails for some reason, it can be restarted depending on your preference in your config. An SSH server is a running process, so if you need SSH access to your setup, make an SSH server service, which can share volumes with other containers running alongside it in the setup.
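A minimal sketch of such an SSH service sharing a named volume with an app container (container, volume and image names are placeholders; linuxserver/openssh-server is the image mentioned in another answer above):
docker volume create appdata
docker run -d --name app -v appdata:/data your-app-image
docker run -d --name ssh -v appdata:/data -p 2222:2222 linuxserver/openssh-server   # this image listens on 2222 by default (check its docs)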
To open a shell on a container in a host directly:
Imagine you are on your PC at home and you have a remote machine that runs docker and has running containers, and you want to open a shell on the container directly without "stopping by" on the remote host:
(The -t flag allocates a tty)
ssh -t user@remote.host 'docker exec -it running_container_name /bin/bash'
If you are already on the host, as in the accepted answer:
(-i for interactive, -t for a tty)
docker exec -it running_container_name /bin/bash