Heads up: I'm a novice at both general web administration and Docker, so my error may well be something very simple.
I am running Docker for Windows Server 2016 (the native variant). I have pulled and built a simple Docker image based on Nano Server and Apache 2.4 (nanoserver/apache24), made a container from this image, and mapped container port 80 to my local port 8082.
From inside the container, I can use Invoke-WebRequest -uri http://localhost:80 and retrieve the default Apache document. I would expect Invoke-WebRequest -uri http://localhost:8082 from outside the container to retrieve the same file, but this does not work. I have also tried using the container's NAT address, running Invoke-WebRequest -uri http://172.23.58.7:8082. This does not work either. What have I misconfigured here?
Screenshot of my process below: PowerShell on the host machine on the left, PowerShell inside the container on the right.
EDIT: @Grimmy asked me in the comment section whether I have EXPOSE 80 in my Dockerfile and whether the docker ps command displays my container with the expected port mapping. It's yes on both counts. My container runs with the arguments -d -it because that was a quick Google fix for the problem where the container exits immediately after launch. I know -i "keeps STDIN open" and -t "allocates a pseudo-TTY", but I frankly don't understand what either of those implies or whether it could be relevant to the problem.
EDIT2: I did not explicitly mention this in the original post, but it's worth noting that netstat -a -o does not display a PID listening on port 8082. I would expect it to. Should that be the case?
The first 50 or so lines of output are shown in the screenshot.
I got the answer from 'artisticcheese' on the Docker Forums:
You cannot connect to a mapped IP address from within the container host itself. It's a bug in the Windows implementation of WinNAT. You need to get the private IP address of the container and connect to it using the port number. Or you can access it through the publicly mapped port from another host on the same network; that shall work.
I connected from another host, and it worked immediately. I frankly don't know what the "private IP address" of the container is as opposed to the NAT address, so I couldn't follow up on that tip.
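For reference, here is how the container's private (NAT) IP can be looked up so you can connect to it on the container port, not the mapped one. This is a sketch; the container name placeholder and the "nat" network name are assumptions that may differ on your setup:
docker inspect -f "{{.NetworkSettings.Networks.nat.IPAddress}}" <container-name>
Invoke-WebRequest -uri http://<container-ip>:80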
Related
We have an application that uses SSH to copy an artifact from one node to another. While creating the Docker image (Linux CentOS 8 based), I installed the OpenSSH server and client. When I run the image with the docker command and exec into it, I can successfully run the SSH command, and I also see port 22 enabled and listening ($ lsof -i -P -n | grep LISTEN).
But if I start a pod/container using the same image in the Kubernetes cluster, I do not see port 22 enabled and listening inside the container. Even if I try to start sshd from inside the k8s container, it gives me the error below:
Redirecting to /bin/systemctl start sshd.service
Failed to get D-Bus connection: Operation not permitted
Is there any way to start the K8s container with SSH enabled?
There are three things to consider:
Like David said in his comment:
I'd redesign your system to use a communication system that's easier
to set up, like with HTTP calls between pods.
If you put a service in front of your deployment, it is not going to relay any SSH connections. So you have to point to the pods directly, which might be pretty inconvenient.
In case you have missed it: you need to declare port 22 in your deployment template; see the sketch below.
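For that last point, a minimal sketch of what the declaration could look like in the pod template (the container and image names are placeholders):
spec:
  containers:
    - name: ssh-app          # placeholder
      image: my-centos-ssh   # placeholder
      ports:
        - containerPort: 22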
Please let me know if that helped.
First off: I have read the answers to similar questions on SO, but none of them worked.
IMPORTANT NOTE: The answer below is still valid, but maybe jump to the end for an alternative.
The situation:
App with GUI is running in a docker container (CentOS 7.1) under Arch Linux. (machine A)
Machine A has a monitor connected to it.
I want to access this GUI via X11 forwarding on my Arch Linux client machine. (machine B)
What works:
GUI works locally on machine A (with /tmp/.X11-unix being mounted in the Docker container).
X11 forwarding of any app running outside of docker (X11 forwarding is set up and running properly for non-docker usage).
I can even switch the user while remotely logged in, copy the .Xauthority file to the other user and X11 forwarding works as well.
Some setup info:
Docker networking is 'bridged'.
Container can reach host (firewall is open).
DISPLAY variable is set in container (to host-ip-addr:10.0 because of TCP port 6010 where sshd is listening).
Packets to X forward port (6010) are reaching the host from the container (tcpdump checked).
What does not work:
X11 forwarding of the Docker app
Errors:
X11 connection rejected because of wrong authentication.
xterm: Xt error: Can't open display: host-ip-addr:10.0
Things I tried:
starting the client ssh with the -Y option on machine B
putting "ForwardX11Trusted yes" in ssh_config on machine B
xhost + (to allow any client to connect) on machine B
putting Host * in ssh_config on machine B
putting X11UseLocalhost no in sshd_config on machine A (to allow non-localhost clients)
Adding the X auth token in the container with xauth add from the login user on machine A
Just copying over the .Xauthority file from a working user into the container
Making sure the .Xauthority file has the correct permissions and owner
How can I just disable all the X security stuff and get this working?
Or even better: how can I get it working with security?
Is there at least a way to enable extensive debugging to see where exactly the problem is?
Alternative: The first answer below shows how to effectively resolve this issue. However, I would recommend you look into a different approach altogether, namely VNC. I personally switched to a TigerVNC setup that replaces the X11 forwarding and have not looked back. The performance is leagues above what X11 forwarding delivered for me. There might be some instances where you cannot use VNC for whatever reason, but I would try it first.
The general setup is now as follows:
-The VNC server runs on machine A, on the host (not inside a docker container).
-Now you just have to figure out how to get a GUI inside a docker container (which is a much more trivial undertaking).
-If the docker container was started NOT from the VNC environment, the DISPLAY variable may need adjusting.
Thanks so much @Lazarus535
I found that for me adding the following to my docker command worked:
--volume="$HOME/.Xauthority:/root/.Xauthority:rw"
I found this trick here
EDIT:
As Lazarus pointed out correctly you also have to set the --net=host option to make this work.
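Putting both together, the full command looks roughly like this (the image name is a placeholder):
docker run --net=host -e DISPLAY=$DISPLAY --volume="$HOME/.Xauthority:/root/.Xauthority:rw" <my-gui-image>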
Ok, here is the thing:
1) Log in to remote machine
2) Check which display was set with echo $DISPLAY
3) Run xauth list
4) Copy the line corresponding to your DISPLAY
5) Enter your docker container
6) xauth add <the line you copied>*
7) Set DISPLAY with export DISPLAY=<ip-to-host>:<no-of-display>
*So far so good, right?
This was nothing new... however, here is the twist:
The line printed by xauth list for the login user looks something like this (in my case):
<hostname-of-machine>/unix:<no-of-display> MIT-MAGIC-COOKIE-1 <some number here>
Because I use the bridged docker setup, the X forwarding port is not listening locally, because sshd is not running in the container. Change the line above to:
<ip-of-host>:<no-of-display> MIT-MAGIC-COOKIE-1 <some number here>
In essence: Remove the /unix part.
<ip-of-host> is the IP address where the sshd is running.
Set the DISPLAY variable as above.
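To make the steps concrete, here is a sketch with example values (the IP, display number and cookie are placeholders):
# on machine A, in the ssh session
echo $DISPLAY   # e.g. localhost:10.0
xauth list      # e.g. myhost/unix:10  MIT-MAGIC-COOKIE-1  1a2b3c4d...
# inside the container: add the edited line, then point DISPLAY at the host
xauth add 192.168.1.10:10 MIT-MAGIC-COOKIE-1 1a2b3c4d...
export DISPLAY=192.168.1.10:10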
So the error was that the DISPLAY name in the environment variable was not the "same" as the entry in the xauth list / .Xauthority file, and the client could therefore not authenticate properly.
I switched back to an untrusted X11 forwarding setting.
The X11UseLocalhost no setting in the sshd_config file is important, however, because the incoming connection will come from a "different" machine (the docker container).
This works in any scenario.
Install xhost if you don't have it. Then, in bash,
export DISPLAY=:0.0
xhost +local:docker
After this run your docker run command (or whatever docker command you are running) with -e DISPLAY=$DISPLAY
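For example (the image name is a placeholder; the socket mount is what lets the container talk to the local X server directly):
docker run -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix <my-gui-image>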
Usually it works via https://stackoverflow.com/a/61060528/429476.
But if you are running docker as a different user than the one used to ssh -X into the server, then copying the Xauthority file only helped together with volume-mapping it.
Example: I SSH'd into the server as user alex, then ran docker after su - root and got this error:
X11 connection rejected because of wrong authentication.
Copying the .Xauthority file and mapping it as in https://stackoverflow.com/a/51209546/429476 made it work:
cp /home/alex/.Xauthority .
docker run -it --network=host --env DISPLAY=$DISPLAY --privileged \
--volume="$HOME/.Xauthority:/root/.Xauthority:rw" \
-v /tmp/.X11-unix:/tmp/.X11-unix --rm <dockerimage>
More details on the wiring here: https://unix.stackexchange.com/a/604284/121634
Some clarifying remarks: the host is A, the local machine is B.
I've edited this post to note things that I think should work in theory but haven't been tested, vs. things I know to work.
Running docker non-interactively
If your container is running non-interactively and running sshd, you can use jump hosts or ProxyCommand and specify the X11 client to run. You should NOT volume-share your Xauthority file with the container, and sharing -e DISPLAY likely has no effect on future ssh sessions.
Since you essentially have two sshd servers, either of the following should work out of the box
If you have an openssh client newer than version 7.3, you can use the following command:
ssh -X -J user-on-host@hostmachine user-on-docker@dockercontainer xeyes
If your openssh client is older, the syntax is instead
(google says the -X is not needed in the proxy command, but I am suspicious)
ssh -X -o ProxyCommand="ssh -W %h:%p user-on-host@hostmachine" user-on-docker@dockercontainer xeyes
Or ssh -X into host, then ssh -X into docker.
In either of the above cases, you should NOT share .Xauthority with the container
Running docker interactively from within the ssh session
The easiest way to get this done is to set --net=host and X11UseLocalhost yes.
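As a reminder, that option lives in sshd_config on the host; a minimal sketch (the config path and service name may vary by distro):
# /etc/ssh/sshd_config on the host
X11UseLocalhost yes
# then reload sshd, e.g.:
sudo systemctl restart sshd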
If your docker is running sshd, you can open a second ssh -X session on your local machine and use the jumphost method as above.
If you start it in the ssh session, you can either pass -e DISPLAY=$DISPLAY or export it once you're in. You might have to export it if you attach to an existing container where this line wasn't used.
Use these docker args for --net host and X11UseLocalhost yes (assembled into one command below):
ssh -X to host
-e DISPLAY=$DISPLAY
-v $HOME/.Xauthority:/home/same-as-dash-u-user/.Xauthority
-u user
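Assembled into one command (the user and image name are placeholders):
ssh -X user@hostmachine
docker run --net=host -e DISPLAY=$DISPLAY -v $HOME/.Xauthority:/home/user/.Xauthority -u user <image>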
What follows is an explanation of how everything works, and other approaches to try.
About Xauthority
ssh -X/-Y sets up a session key in the host's Xauthority file, then sets up a listen port on which it places an X11 proxy that uses the session key and converts it to be compatible with the key on your local machine. By design, the .Xauthority keys will be different between your local machine and the host machine. If you use jump hosts/ProxyCommand, the keys between the host and the container will again be different from each other. If you instead use ssh tunnels or a direct X11 connection, you will have to share the host's Xauthority with the container; in that case you can only have one active session per user, since each new session invalidates the previous ones by modifying the host's .Xauthority such that it only works with that session's ssh X11 proxy.
X11UseLocalhost no theory
Even though X11UseLocalhost no causes the X server to listen on the wildcard address, with --net host I could not redirect the container display to localhost:X.Y, where X and Y are from the host $DISPLAY.
X11UseLocalhost yes is the easy way
If you choose X11UseLocalhost yes, the DISPLAY variable on the host becomes localhost:X.Y, which causes the ssh X11 proxy to listen only on localhost port 6000+X.
If X11UseLocalhost is no, the DISPLAY variable on the host becomes the host's hostname:X.Y, which causes the ssh X11 proxy to listen on 0.0.0.0:6000+X and causes X clients to reach out over the network to the hostname specified.
This is theoretical; I don't yet have access to docker on a remote host to test it.
But this is the easy way. We bypass all of that by redirecting the DISPLAY variable to always be localhost, and doing docker port mapping to move the data from localhost:X+1.Y in the container to localhost:X.Y on the host, where ssh is waiting to forward the X traffic back to the local machine. The +1 makes us agnostic to running either --net=host or --net=bridge.
Setting up container ports requires specifying EXPOSE in the Dockerfile and publishing the ports with the -p flag.
Setting everything up manually without ssh -X
This works only with --net host. This approach works without xauth because we are piping directly to your X11 unix domain socket on the local machine.
ssh to host without -X
ssh -R6010:localhost:6010 user#host
start docker with -e DISPLAY=localhost:10.1 or export inside
in another terminal on local machine
socat -d -d TCP-LISTEN:6010,fork UNIX-CONNECT:/tmp/.X11-unix/X0
In original terminal run xclients
If the container is on --net=bridge and you can't use docker ports, enable sshd in the container and use the jump host method above.
I'm using IBM Bluemix and Docker.
[My goal] I want to create a container. I found on the website that we can use SSH to log in as the "root" user, so I guess I could also install Maven and MySQL on this container. Though an IBM Container is a Docker-based file system, we could treat the container just like a Linux virtual machine (please correct me if wrong).
I found a similar question here, where njleviere said that port 22 is closed. How do I determine if a port is open or closed? If it's closed, how do I open it? Also, I think that port 22 is actually open in my case.
[Problem Description] I mainly followed this website, but I'm using Ubuntu and SSH instead of PuTTY.
First, I created the key file with ssh-keygen. For the filename, I tried "cloud" and "cloud.key". Both failed, so I think the name of the key file does not matter (please correct me if wrong).
I opened the .pub key. There is a "yu@yu-VirtualBox" tag at the end of the key file. I am not sure if I should include this tag, so I tried several things:
ssh-rsa KeyString yu#yu-VirtualBox
ssh-rsa KeyString
KeyString
All failed.
Then I created the container. I chose "ibmliberty". Given the public IP I created before (already unbound from any containers), I added 22 to the public ports and pasted "cloud.pub" into the SSH key field. After several minutes, the container started running. The following two links are screenshots of the Bluemix console while creating the container.
Then I could see the default page for port 9080 in the browser at https://169.44.124.121:9080. It said "Welcome to Liberty" and "WebSphere Application Server V8.5.5.9".
Then I typed (cloud and cloud.pub are the key files):
ssh -i cloud root@169.44.124.121
Then I get:
ssh: connect to host 169.44.124.121 port 22: Connection refused
I used cf ic ps to check the port. It looks fine.
I see 169.44.124.121:22->22/tcp under the PORTS.
Also, I see many programmers use a Dockerfile to launch IBM Containers. Should I switch to a Dockerfile instead of the IBM console web interface?
The default ibmliberty image on Bluemix doesn't include sshd. You could add it: you'll need to add supervisord, sshd, and the appropriate configuration for both to your Dockerfile.
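A rough sketch of that idea (untested; it assumes the base image is Debian/Ubuntu-based, and supervisord.conf is your own file that starts both sshd and the Liberty server):
FROM ibmliberty:latest
# install sshd and a process supervisor so both can run in one container
RUN apt-get update && apt-get install -y openssh-server supervisor && mkdir -p /var/run/sshd
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
EXPOSE 22
CMD ["/usr/bin/supervisord", "-n"]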
Alternatively, if what you really want is just a secure command-line connection into your container, you can use cf ic exec or docker exec (e.g. cf ic exec -ti mycontainername bash). That'll give you a command line without the overhead (and security exposure) of a running sshd.
I've been following this beginner's tutorial about Docker, which basically instructs you to create an Apache container and map a localhost port to the one on the container.
When I try localhost:80 it doesn't connect, although the container is up and running.
I even made a firewall rule to allow connections to port 80, but still couldn't connect to localhost.
Any ideas?
On Windows/OS X, Docker is running inside a Linux virtual machine (Docker Toolbox) with a default IP address of 192.168.99.100. Thus, when you use docker run -p 80:80 to bind the container port to a host port, it in fact binds to the virtual machine's port 80. So the address you need is http://192.168.99.100.
The 172.17.0.3 address is the address of the docker container inside that virtual machine, and is not accessible directly from Windows/OS X.
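If you're not sure of the virtual machine's address, Docker Toolbox can print it (assuming your machine has the default name):
docker-machine ip default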
Add this line to your Dockerfile before restarting Apache:
RUN echo 'ServerName localhost' >> /etc/apache2/apache2.conf
I stumbled upon this question as I was looking for a way to bind my local HTTP port (80) to the HTTP port of my container, an Apache container running on Docker Desktop for Windows through WSL2 (this is important).
I couldn't find a quick and easy way to do this, so I figured it out myself.
What you must do is bind your local port (on Windows) to the port on WSL.
Here is how I did it:
# get the eth0 IPv4 address of the "docker-desktop" WSL2 distribution
$wsl_ip = (wsl -d "docker-desktop" -- "ifconfig" "eth0" "|" "grep" "inet addr:").trim("").split(":").split()[2]
# forward Windows ports 443 and 80 to the same ports on the WSL2 VM
netsh interface portproxy add v4tov4 listenport=443 listenaddress=0.0.0.0 connectport=443 connectaddress=$wsl_ip
netsh interface portproxy add v4tov4 listenport=80 listenaddress=0.0.0.0 connectport=80 connectaddress=$wsl_ip
You can either create a Powershell Script (.ps1) and run it with Powershell, or copy/paste each command line into Windows Terminal / Powershell running with Administrator Privileges.
What this does is:
- attach to the "docker-desktop" distribution running in WSL2
- run "ifconfig eth0 | grep inet addr:" to get the local IP address of the "virtual machine"
- parse the result, and use netsh to create a portproxy between port 80 of your Windows machine and port 80 of your Linux machine; the same is done for port 443. You can easily map other ports if you understand what the command is doing.
More explanation:
Since Docker for Windows 10/11 uses WSL2, when you expose a port (through docker-compose or with an EXPOSE command in your Dockerfile), it is exposed to a Linux distribution called "docker-desktop" that is run with WSL2. For some reason, ports 80 and 443 exposed from a container are NOT forwarded to the host.
The official documentation acknowledges some issues, but their solution is just to use another port (for example, 8080 mapped to 80).
Issues with this method:
Each time you reboot your system (or WSL2), the Linux machine gets assigned a new IP and you have to do it again. What you could do is set up a command, run when your container starts, that connects through ssh to the host and runs the script, but I'm too lazy to have done it myself.
I am trying to reach the container named redis, which is running right now, but I get the error Could not connect to Redis at redis:6379: Name or service not known. Can anyone please help me figure out the issue and fix it?
This is because the two containers are not on the same network. Add a networks property inside each service and make sure it's the same for both; the top-level networks definition is also needed:
redis:
  networks:
    - redis-net
networks:
  redis-net:
Naming the container doesn't alter your hosts file or DNS, and depending on how you ran the container it may not be accessible via the standard port as Docker does port translation.
Run docker inspect redis and examine the ports output, it will tell you what port it is accessible on as well as the IP. Note, however, that this will only be connectable over that IP from that host. To access it from off of the host you will need to use the port from the above command and the host's IP address. That assumes your local firewall rules allow it, which are beyond the scope of this site.
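For example, to print just the port mappings (assuming the container is named redis):
docker inspect -f '{{json .NetworkSettings.Ports}}' redis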
Try the command below:
src/redis-cli -h localhost -p 6379