We have an application which uses SSH to copy artifacts from one node to another. While creating the Docker image (CentOS 8 based), I installed the OpenSSH server and client. When I run the image with the docker command and exec into it, I can run the SSH command successfully, and I also see port 22 enabled and listening ($ lsof -i -P -n | grep LISTEN).
But if I start a Pod/container using the same image in the Kubernetes cluster, I do not see port 22 enabled and listening inside the container. Even if I try to start sshd from inside the k8s container, it gives me the error below:
Redirecting to /bin/systemctl start sshd.service
Failed to get D-Bus connection: Operation not permitted.
Is there any way to start the K8s container with SSH enabled?
There are three things to consider:
Like David said in his comment:
I'd redesign your system to use a communication system that's easier to set up, like with HTTP calls between pods.
If you put a Service in front of your deployment, SSH connections will be load-balanced across pods, so you may have to point to the pods directly, which can be pretty inconvenient.
In case you have missed it: you need to declare port 22 in your deployment template.
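As for the systemd error: containers normally don't run systemd, so instead of systemctl you would run sshd in the foreground as the container command. A minimal sketch of such a deployment (the deployment name and image are placeholders, and it assumes SSH host keys already exist in the image, e.g. generated with ssh-keygen -A at build time):
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ssh-enabled-app              # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ssh-enabled-app
  template:
    metadata:
      labels:
        app: ssh-enabled-app
    spec:
      containers:
      - name: app
        image: my-centos8-ssh-image              # hypothetical image with openssh-server installed
        command: ["/usr/sbin/sshd", "-D", "-e"]  # run sshd in the foreground, no systemd needed
        ports:
        - containerPort: 22                      # declare port 22 as mentioned above
EOF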
Please let me know if that helped.
I'm in a team where some of us use Docker Toolbox and some use Docker Desktop. We're writing an application that needs to communicate with a Docker container in development.
On Docker Toolbox, I know the docker-machine env command sets the Docker host environment variables, and I can use those to get the IP of the virtual machine that's running the Docker engine. From there I just access the exposed ports.
What's the equivalent way to get that information on docker desktop? (I do not have a machine that has docker desktop, only docker toolbox but I'm writing code that should be able to access the docker container on both)
On Windows, after installing Docker, there is an entry added by Docker Desktop to your hosts file (C:\Windows\System32\drivers\etc\hosts), which states the IP as:
# Added by Docker Desktop
10.xx.xx.xx host.docker.internal
The section below got added to my /etc/hosts:
# Added by Docker Desktop
192.168.99.1 host.docker.internal
192.168.99.1 gateway.docker.internal
Then I was able to access the app by adding the port it was bound to.
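For example (the port is a placeholder for whatever port your app publishes):
curl http://host.docker.internal:8080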
This command should display the IP
ping -q -c 1 docker.local | sed -En "s/^.*\((.+)\).*$/\1/p"
ipconfig can get you this information as well
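Tying this back to the question: a minimal shell sketch (untested; assumes the container's ports are published) that resolves the Docker host address on both Toolbox and Desktop could look like this:
# DOCKER_HOST is set by `docker-machine env` on Docker Toolbox, e.g. tcp://192.168.99.100:2376.
# On Docker Desktop it is usually unset, and published ports are reachable on localhost.
if [ -n "$DOCKER_HOST" ]; then
  DOCKER_IP=$(echo "$DOCKER_HOST" | sed -E 's#^tcp://([^:]+):.*#\1#')
else
  DOCKER_IP=localhost
fi
echo "Connect to $DOCKER_IP:<published-port>"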
I have a very strange issue in my local development environment. I have a couple of Docker containers that run two different environments, both fronted by Apache. Both are connected to the same bridge network, and one has port 80 exposed and the other port 8010. When the containers are running I can connect using telnet as follows:
telnet localhost 80
or
telnet localhost 8010
However, from the browser, nothing happens and in the end it just times out. In the logs on the Docker containers there is nothing to show an inbound connection.
From the Docker containers' shell, I can access the HTTP server using curl without issue.
I tried deleting the bridge network and adding it again but that didn't help.
I've tried turning the macOS firewall off but that doesn't help.
If I stop the Docker containers and then try the above telnet command, it errors with "Connection refused", as would be expected, so the telnet command is definitely connecting to the Docker container.
Also, this setup had been working fine for some time until today.
I'm lost as to what to try next and have found nothing similar Googling.
Any ideas of how to resolve this would be gratefully received.
To resolve this I did:
docker-compose rm -f
docker images --no-trunc --format '{{.ID}}' | xargs docker rmi
and then rebuilt the images / containers.
Be careful with the above as they are destructive commands.
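For reference, the rebuild step was along these lines (a sketch assuming a docker-compose project):
docker-compose build --no-cache
docker-compose up -d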
First off: I have read the answers to similar questions on SO, but none of them worked.
IMPORTANT NOTE: The answer below is still valid, but maybe jump to the end for an alternative.
The situation:
An app with a GUI is running in a Docker container (CentOS 7.1) on an Arch Linux host (machine A).
Machine A has a monitor connected to it.
I want to access this GUI via X11 forwarding on my Arch Linux client machine. (machine B)
What works:
GUI works locally on machine A (with /tmp/.X11-unix being mounted in the Docker container).
X11 forwarding of any app running outside of docker (X11 forwarding is set up and running properly for non-docker usage).
I can even switch the user while remotely logged in, copy the .Xauthority file to the other user and X11 forwarding works as well.
Some setup info:
Docker networking is 'bridged'.
Container can reach host (firewall is open).
The DISPLAY variable is set in the container (to host-ip-addr:10.0, because sshd is listening for X forwarding on TCP port 6010).
Packets to X forward port (6010) are reaching the host from the container (tcpdump checked).
What does not work:
X11 forwarding of the Docker app
Errors:
X11 connection rejected because of wrong authentication.
xterm: Xt error: Can't open display: host-ip-addr:10.0
Things I tried:
starting the ssh client with the -Y option on machine B
putting "ForwardX11Trusted yes" in ssh_config on machine B
xhost + (to allow any client to connect) on machine B
putting Host * in ssh_config on machine B
putting X11UseLocalhost no in sshd_config on machine A (to allow non-localhost clients)
Adding the X auth token in the container with xauth add from the login user on machine A
Just copying over the .Xauthority file from a working user into the container
Making sure the .Xauthority file has the correct permissions and owner
How can I just disable all the X security stuff and get this working?
Or even better: how can I get it working with security?
Is there at least a way to enable extensive debugging to see where exactly the problem is?
Alternative: The first answer below shows how to effectively resolve this issue. However, I would recommend looking into a different approach altogether, namely VNC. I personally switched to a TigerVNC setup that replaces the X11 forwarding and have not looked back. The performance is leagues above what X11 forwarding delivered for me. There might be some instances where you cannot use VNC for whatever reason, but I would try it first.
The general setup is now as follows:
- A VNC server runs on machine A on the host (not inside a docker container).
- Now you just have to figure out how to get a GUI inside the docker container (which is a much more trivial undertaking).
- If the docker container was started NOT from the VNC environment, the DISPLAY variable may need adjusting.
Thanks so much @Lazarus535
I found that for me adding the following to my docker command worked:
--volume="$HOME/.Xauthority:/root/.Xauthority:rw"
I found this trick here
EDIT:
As Lazarus correctly pointed out, you also have to set the --net=host option to make this work.
Ok, here is the thing:
1) Log in to remote machine
2) Check which display was set with echo $DISPLAY
3) Run xauth list
4) Copy the line corresponding to your DISPLAY
5) Enter your docker container
6) xauth add <the line you copied>*
7) Set DISPLAY with export DISPLAY=<ip-to-host>:<no-of-display>
*so far so good right?
This was nothing new...however here is the twist:
The line printed by xauth list for the login user looks something like this (in my case):
<hostname-of-machine>/unix:<no-of-display> MIT-MAGIC-COOKIE-1 <some number here>
Because I use the bridged Docker setup, the X forwarding port is not listening locally inside the container, because sshd is not running in the container. Change the line above to:
<ip-of-host>:<no-of-display> MIT-MAGIC-COOKIE-1 <some number here>
In essence: Remove the /unix part.
<ip-of-host> is the IP address where the sshd is running.
Set the DISPLAY variable as above.
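Concretely, inside the container the sequence looks something like this (the IP, display number and cookie are made-up placeholder values):
xauth add 192.168.1.10:10 MIT-MAGIC-COOKIE-1 0123456789abcdef0123456789abcdef   # the xauth list entry, with /unix removed
export DISPLAY=192.168.1.10:10
xterm   # or any other X client, to test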
So the error was that the DISPLAY name in the environment variable was not the "same" as the entry in the xauth list / .Xauthority file, and the client could therefore not authenticate properly.
I switched back to an untrusted X11 forwarding setting.
The X11UseLocalhost no setting in the sshd_config file, however, is important, because the incoming connection will come from a "different" machine (the Docker container).
This works in any scenario.
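For reference, on machine A that amounts to something like this (the restart command may differ with your init system):
echo "X11UseLocalhost no" | sudo tee -a /etc/ssh/sshd_config   # or edit the file by hand
sudo systemctl restart sshd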
Install xhost if you don't have it. Then, in bash,
export DISPLAY=:0.0
xhost +local:docker
After this run your docker run command (or whatever docker command you are running) with -e DISPLAY=$DISPLAY
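For instance, a minimal invocation (the image name is a placeholder; mounting the X socket into the container, as in the other answers, is assumed):
docker run --rm -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix some-gui-image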
Usually it works via https://stackoverflow.com/a/61060528/429476
But if you are running docker as a different user than the one you used for ssh -X into the server, then copying the .Xauthority file only helped together with volume-mapping the file.
Example: I SSHed into the server as the alex user, then ran docker after su - root and got this error:
X11 connection rejected because of wrong authentication.
After copying the .Xauthority file and mapping it like https://stackoverflow.com/a/51209546/429476, it worked:
cp /home/alex/.Xauthority .
docker run -it --network=host --env DISPLAY=$DISPLAY --privileged \
--volume="$HOME/.Xauthority:/root/.Xauthority:rw" \
-v /tmp/.X11-unix:/tmp/.X11-unix --rm <dockerimage>
More details on the wiring here: https://unix.stackexchange.com/a/604284/121634
Some clarifying remarks: the host is A, the local machine is B.
I've edited this post to note things that I think should work in theory but haven't been tested, vs. things I know to work.
Running docker non-interactively
If your docker container is running non-interactively and running sshd, you can use jump hosts or ProxyCommand and specify the X11 client to run. You should NOT volume-share your Xauthority file with the container, and passing -e DISPLAY likely has no effect on future ssh sessions.
Since you essentially have two sshd servers, either of the following should work out of the box.
If you have an openssh-client newer than version 7.3, you can use the following command:
ssh -X -J user-on-host@hostmachine,user-on-docker@dockercontainer xeyes
If your openssh client is older, the syntax is instead:
(Google says the -X is not needed in the ProxyCommand, but I am suspicious)
ssh -X -o ProxyCommand="ssh -W %h:%p user-on-host@hostmachine" user-on-docker@dockermachine xeyes
Or ssh -X into the host, then ssh -X into the docker container.
In either of the above cases, you should NOT share .Xauthority with the container
Running docker interactively from within the ssh session
The easiest way to get this done is to set --net=host and X11UseLocalhost yes.
If your docker container is running sshd, you can open a second ssh -X session on your local machine and use the jump-host method as above.
If you start it in the ssh session, you can either pass -e DISPLAY=$DISPLAY or export it once you're in. You might have to export it if you attach to an existing container where this line wasn't used.
Use these docker args for --net host and X11UseLocalhost yes (a combined command is shown after the list):
ssh -X to host
-e DISPLAY=$DISPLAY
-v $HOME/.Xauthority:/home/same-as-dash-u-user/.Xauthority
-u user
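Put together, such an invocation might look like this sketch (user name and image are placeholders):
# on the local machine (B): ssh -X user@host
# then, on the host (A):
docker run -it --net=host \
  -e DISPLAY=$DISPLAY \
  -v $HOME/.Xauthority:/home/someuser/.Xauthority \
  -u someuser some-gui-image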
What follows is an explanation of how everything works, plus other approaches to try.
About Xauthority
ssh -X/-Y sets up a session key in the host's Xauthority file, and then sets up a listen port on which it places an X11 proxy that uses that session key and converts it to be compatible with the key on your local machine. By design, the .Xauthority keys will be different between your local machine and the host machine. If you use jump hosts / ProxyCommand, the keys between the host and the container will yet again be different from each other. If you instead use ssh tunnels or a direct X11 connection, you will have to share the host's Xauthority with the container. When sharing .Xauthority with the container, you can only have one active session per user, since a new session will invalidate the previous ones by modifying the host's .Xauthority such that it only works with that session's ssh X11 proxy.
X11UseLocalhost no theory
Even though X11UseLocalhost no causes the X server to listen on the wildcard address, with --net host I could not redirect the container display to localhost:X.Y, where X and Y are from the host's $DISPLAY.
X11UseLocalhost yes is the easy way
If you choose X11UseLocalhost yes, the DISPLAY variable on the host becomes localhost:X.Y, which causes the ssh X11 proxy to listen only on localhost (port 6000+X).
If X11UseLocalhost is no, the DISPLAY variable on the host becomes the host's hostname:X.Y, which causes the ssh X11 proxy to listen on 0.0.0.0:6000+X and causes X clients to reach out over the network to the hostname specified.
This is theoretical; I don't yet have access to docker on a remote host to test this.
But this is the easy way. We bypass that by redirecting the DISPLAY variable to always be localhost, and doing docker port mapping to move the data from localhost:X+1.Y on the container to localhost:X.Y on the host, where ssh is waiting to forward the X traffic back to the local machine. The +1 makes us agnostic to running either --net=host or --net=bridge.
Setting up container ports requires specifying EXPOSE in the Dockerfile and publishing the ports with the -p option.
Setting everything up manually without ssh -X
This works only with --net host. This approach works without xauth because we are piping directly to the Unix domain socket of the X server on your local machine.
ssh to host without -X
ssh -R6010:localhost:6010 user@host
start docker with -e DISPLAY=localhost:10.1, or export it inside
in another terminal on local machine
socat -d -d TCP-LISTEN:6010,fork UNIX-CONNECT:/tmp/.X11-unix/X0
In the original terminal, run your X clients
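Putting the container side of this together, the start command could look like this (the image name is a placeholder):
docker run -it --rm --net=host -e DISPLAY=localhost:10.1 some-gui-image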
If the container is on a bridge network and you can't use docker ports, enable sshd in the container and use the jump-host method.
I installed spinnaker using the command
bash <(curl --silent https://spinnaker.bintray.com/scripts/InstallSpinnaker.sh)
on a local ubuntu machine.
After installation I am not able to connect to the Deck UI of Spinnaker at the URL http://localhost:9000.
Check the logs in /var/log/apache2 for errors, and /etc/apache2/ports.conf to see if it is listening on 127.0.0.1:9000.
The install script should have made those changes for you, but maybe you had a permissions issue or some other kind of local system policy preventing the installation from working properly.
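For example, some standard checks (not from the original answer) would be:
sudo tail -n 50 /var/log/apache2/error.log   # recent Apache errors
grep -n 9000 /etc/apache2/ports.conf         # is Apache configured for port 9000?
sudo ss -tlnp | grep 9000                    # is anything actually listening on 9000?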