Can only connect to local tmux session over ssh

So, I have tmux session running on my local machine, but I can only connect to it (or see information about it) if I ssh back to myself first:
% tmux ls
failed to connect to server: Connection refused
% ssh localhost -t tmux ls
Password:
0: 2 windows (created Mon Nov 26 12:47:44 2012) [208x52] (attached)
Connection to localhost closed.
This isn't the worst hoop to have to jump through, but why is it happening, and how can I fix it?

For its client/server communication, tmux uses a named socket (in a UID-based subdirectory) under the directory specified by the TMPDIR environment variable. If this environment variable is not set (or it is empty), then tmux uses the directory defined by _PATH_TMP from paths.h; this is often /tmp.
Note: The following uses of “session” refer to login sessions, not tmux sessions.
My guess is that your ssh sessions share a common TMPDIR value (possibly not having one at all), while your “normal” sessions use a different TMPDIR value. Since the TMPDIR values are different in your different sessions, a client in one session type cannot directly “see” a server started in the other session type (e.g. the client tries using /var/folders/random/directories/tmux-500/default, but the server is listening at /tmp/tmux-500/default).
To fix the problem you can simply adjust your TMPDIR to match whatever it normally is in your ssh sessions:
TMPDIR=$(/usr/bin/ssh localhost -t 'echo $TMPDIR') && export TMPDIR
You can determine the path your client is trying to use like this:
tmux -L temp start\; info | grep path
This will create an ephemeral server using a socket named temp instead of default, and show you the path to the socket it is using.
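If you already know where the existing server's socket lives (for example from the ssh session where tmux ls works), you can also point the client straight at that socket with -S instead of adjusting TMPDIR. A minimal sketch, assuming the server socket turns out to be /tmp/tmux-500/default (use whatever path the info command reports for you):
tmux -S /tmp/tmux-500/default attach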

Tmux manages its sockets under /tmp/tmux-USERID, and each of these sockets has a name attached to it.
For example:
$ pwd
/tmp/tmux-2037
$ ls
default foo
$ tmux -L foo ls
0: 1 windows (created Tue Dec 4 13:36:10 2012) [172x52]
$ tmux -L default ls
0: 1 windows (created Tue Nov 20 16:21:14 2012) [188x47]
$ tmux ls
0: 1 windows (created Tue Nov 20 16:21:14 2012) [188x47]
Take a look at what you've got under /tmp/tmux-USERID and try attaching to some of those sockets by name to see if that's contributing to your problem (running tmux ls is the same as running tmux -L default ls).
If all else fails, it may be worth it to fully detach that tmux (close all windows and exit fully) and then rm /tmp/tmux-500/default to see if there's something stateful about your current problem.
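To enumerate the sockets your user currently has, a quick sketch assuming the default layout described above (some-socket-name is a placeholder for whatever name shows up):
ls -l "${TMPDIR:-/tmp}/tmux-$(id -u)/"
tmux -L some-socket-name ls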

Related

Control where the SSH_AUTH_SOCK agent socket gets created

I run a ssh-agent which works fine in a lot of cases to ssh to some central host and from there to others.
I've one case, the ssh-connection to my ISP, where a
ssh -At user@host
creates on the target host this environment variables:
$ env | grep SSH
SSH_CONNECTION=132.174.172.2 51585 178.254.11.41 22
SSH_AUTH_SOCK=/tmp/ssh-2NIOsqUvTc/agent.30537
SSH_CLIENT=132.174.172.2 51585 22
SSH_TTY=/dev/pts/2
$ ls -l /tmp/ssh-2NIOsqUvTc/agent.30537
ls: cannot access '/tmp/ssh-2NIOsqUvTc/agent.30537': No such file or directory
$ ls -l /tmp
total 0
Perhaps in the session on that host the provider puts me into a chroot'ed environment without access to the socket file the ssh daemon created for me...
The question is: can I control somehow where this socket is created, for example in my $HOME?
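As far as I know, sshd offers no option to choose where the forwarded-agent socket is created, but a common workaround (a sketch, not from the original post, and it only helps if $SSH_AUTH_SOCK is actually reachable from your session) is to have ~/.ssh/rc on the target host publish a stable symlink in $HOME:
# ~/.ssh/rc on the target host (sketch; only helps if the socket itself is reachable)
if [ -S "$SSH_AUTH_SOCK" ]; then
    ln -sf "$SSH_AUTH_SOCK" "$HOME/.ssh/agent.sock"
fi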

How to prevent nested tmux session in a new login shell?

Almost all solutions (1, 2) for making tmux run at shell startup depend on environment variables like $TMUX, $TERM, etc. But when we start a login shell, such as with su -, all variables are cleared except $TERM. So we can only rely on $TERM to avoid starting nested sessions. Let's say the default $TERM is xterm and we set it to screen in .tmux.conf so we can tell we are in a tmux session. This works fine for local login.
Now suppose two machines A and B use the same rule to control nested sessions and we are in a tmux session on machine A. When we log in remotely (through ssh) from A to B, a tmux session won't start on B because $TERM is already set to screen.
So, isn't there a way to find out that we are already in a tmux session without depending on environment variables?
PS:
I'm posting, as an answer, a workaround that I use to achieve the behavior described above. But a more accurate and robust method, such as one that works using tmux commands, would be much appreciated.
This solution works by finding out whether the current terminal is connected to a tmux server running on the same machine. To detect the connection, we make use of a pseudoterminal pair and an I/O statistics hack.
However, it may fail if the relevant /proc or /dev files are not readable/writable by the user. For instance, if the tmux server was launched by the root user, a non-root user won't be able to find it.
Also, we may get false positives if the tmux server is receiving data from some other source at the same time we are trying to write zeros to it.
Put this at the end of .bashrc or whichever shell startup file you use:
# ~/.bashrc
# don't waste time if the $TMUX environment variable is set
[ -z "$TMUX" ] || return
# don't start a tmux session if current shell is not connected to a terminal
pts=$(tty) || return
# find out processes connected to master pseudoterminal
for ptm in $(fuser /dev/ptmx 2>/dev/null)
do
# ignore process if it's not a tmux server
grep -q tmux /proc/$ptm/comm || continue
# number of bytes already read by tmux server
rchar_old=$(awk '/rchar/ {print $2}' /proc/$ptm/io)
# write out 1000 bytes to current slave pseudoterminal terminal
dd bs=1 count=1000 if=/dev/zero of="$pts" &>/dev/null
# read number of bytes again and find difference
diff=$(( $(awk '/rchar/ {print $2}' /proc/$ptm/io) - rchar_old ))
# if it equals 1000, current terminal is connected to tmux server
# however diff comes greater than 1000 most of the times
[ $diff -ge 1000 ] && return
done
# start or attach to a tmux session
echo 'Press any key to interrupt tmux session.'
read -st1 key && return
# connect to a detached session if exists for current user
session=($(tmux list-sessions 2>/dev/null | sed -n '/(attached)/!s/:.*//p'))
[ -z "$session" ] || exec tmux attach -t "${session[0]}"
# start a new session after all
exec tmux
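A shorter check in a similar spirit (a sketch, not from the original answer): instead of the I/O statistics trick, ask the tmux server on the default socket whether the current terminal is one of its panes. Like the script above, this only works if the server socket is accessible to your user:
# alternative sketch: detect a local tmux pane without environment variables
if tmux list-panes -a -F '#{pane_tty}' 2>/dev/null | grep -qx "$(tty)"; then
    return    # current terminal belongs to a tmux server on this machine
fi
exec tmux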

X11 forwarding of a GUI app running in docker

First off: I have read the answers to similar questions on SO, but none of them worked.
IMPORTANT NOTE: The answer below is still valid, but maybe jump to the end for an alternative.
The situation:
App with GUI is running in a docker container (CentOS 7.1) under Arch Linux. (machine A)
Machine A has a monitor connected to it.
I want to access this GUI via X11 forwarding on my Arch Linux client machine. (machine B)
What works:
GUI works locally on machine A (with /tmp/.X11-unix being mounted in the Docker container).
X11 forwarding of any app running outside of docker (X11 forwarding is set up and running properly for non-docker usage).
I can even switch the user while remotely logged in, copy the .Xauthority file to the other user and X11 forwarding works as well.
Some setup info:
Docker networking is 'bridged'.
Container can reach host (firewall is open).
DISPLAY variable is set in container (to host-ip-addr:10.0 because of TCP port 6010 where sshd is listening).
Packets to X forward port (6010) are reaching the host from the container (tcpdump checked).
What does not work:
X11 forwarding of the Docker app
Errors:
X11 connection rejected because of wrong authentication.
xterm: Xt error: Can't open display: host-ip-addr:10.0
Things i tried:
starting client ssh with ssh -Y option on machine B
putting "X11ForwardTrusted yes" in ssh_config on machine B
xhost + (to allow any clients to connect) on machine B
putting Host * in ssh_config on machine B
putting X11UseLocalhost no in sshd_config on machine A (to allow non-localhost clients)
Adding the X auth token in the container with xauth add from the login user on machine A
Just copying over the .Xauthority file from a working user into the container
Making sure the .Xauthority file has correct permissions and owner
How can I just disable all the X security stuff and get this working?
Or even better: How can I get it working with security?
Is there at least a way to enable extensive debugging to see where exactly the problem is?
Alternative: The first answer below shows how to effectively resolve this issue. However: I would recommend you look into a different approach altogether, namely VNC. I personally switched to a TigerVNC setup that replaces the X11 forwarding and have not looked back. The performance is just leagues above what X11 forwarding delivered for me. There might be some instances where you cannot use VNC for whatever reason, but I would try it first.
The general setup is now as follows:
-VNC server runs on machine A on the host (not inside a docker container).
-Now you just have to figure out how to get a GUI inside a docker container (which is a much simpler undertaking).
-If the docker container was not started from the VNC environment, the DISPLAY variable may need adjusting.
Thanks so much @Lazarus535
I found that for me adding the following to my docker command worked:
--volume="$HOME/.Xauthority:/root/.Xauthority:rw"
I found this trick here
EDIT:
As Lazarus pointed out correctly you also have to set the --net=host option to make this work.
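Put together, the docker invocation might look like this sketch (your-image-name is a placeholder):
docker run -it --net=host \
    -e DISPLAY=$DISPLAY \
    --volume="$HOME/.Xauthority:/root/.Xauthority:rw" \
    your-image-name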
Ok, here is the thing:
1) Log in to remote machine
2) Check which display was set with echo $DISPLAY
3) Run xauth list
4) Copy the line corresponding to your DISPLAY
5) Enter your docker container
6) xauth add <the line you copied>*
7) Set DISPLAY with export DISPLAY=<ip-to-host>:<no-of-display>
*so far so good right?
This was nothing new...however here is the twist:
The line printed by xauth list for the login user looks something like this (in my case):
<hostname-of-machine>/unix:<no-of-display> MIT-MAGIC-COOKIE-1 <some number here>
Because I use the bridged docker setup, the X forwarding port is not listening locally, because the sshd is not running in the container. Change the line above to:
<ip-of-host>:<no-of-display> MIT-MAGIC-COOKIE-1 <some number here>
In essence: Remove the /unix part.
<ip-of-host> is the IP address where the sshd is running.
Set the DISPLAY variable as above.
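For reference, the whole sequence might look like this sketch (the display number, IP address and cookie value are placeholders, not values from the original post):
# on the host, inside the ssh -X/-Y login shell
echo $DISPLAY                 # e.g. localhost:10.0
xauth list                    # copy the MIT-MAGIC-COOKIE-1 line for that display
# inside the container
xauth add 192.168.1.10:10 MIT-MAGIC-COOKIE-1 0123456789abcdef0123456789abcdef
export DISPLAY=192.168.1.10:10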
So the error was that the DISPLAY name in the environment variable was not the "same" as the entry in the xauth list / .Xauthority file, and the client could therefore not authenticate properly.
I switched back to an untrusted X11 forwarding setting.
The X11UseLocalhost no setting in the sshd_config file however is important, because the incoming connection will come from a "different" machine (the docker container).
This works in any scenario.
Install xhost if you don't have it. Then, in bash,
export DISPLAY=:0.0
xhost +local:docker
After this run your docker run command (or whatever docker command you are running) with -e DISPLAY=$DISPLAY
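A possible full invocation (a sketch, assuming the X server runs on the same host as docker; the image name is a placeholder, and mounting the X11 socket as in the question is usually needed as well):
export DISPLAY=:0.0
xhost +local:docker
docker run -it -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix your-image-name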
Usually it works via https://stackoverflow.com/a/61060528/429476
But if you are running docker as a different user than the one used to ssh -X into the server, then copying the Xauthority file only helped together with volume-mapping it.
Example: I sshed into the server as the alex user, then ran docker after su - root and got this error:
X11 connection rejected because of wrong authentication.
Copying the .Xauthority file and mapping it as in https://stackoverflow.com/a/51209546/429476 made it work:
cp /home/alex/.Xauthority .
docker run -it --network=host --env DISPLAY=$DISPLAY --privileged \
--volume="$HOME/.Xauthority:/root/.Xauthority:rw" \
-v /tmp/.X11-unix:/tmp/.X11-unix --rm <dockerimage>
More details on wiring here https://unix.stackexchange.com/a/604284/121634
Some clarifying remarks: the host is A, the local machine is B.
I've edited this post to note things that I think should work in theory but haven't been tested, vs. things I know to work.
Running docker non-interactively
If your docker container is running non-interactively and running sshd, you can use jumphosts or ProxyCommand and specify the X11 client to run. You should NOT volume-share your Xauthority file with the container, and sharing -e DISPLAY likely has no effect on future ssh sessions.
Since you essentially have two sshd servers, either of the following should work out of the box
if you have openssh-client greater than version 7.3, you can use the following command
ssh -X -J user-on-host@hostmachine,user-on-docker@dockercontainer xeyes
If your openssh client is older, the syntax is instead
(google says the -X is not needed in the proxy command, but I am suspicious)
ssh -X -o ProxyCommand="ssh -W %h:%p user-on-host@hostmachine" user-on-docker@dockermachine xeyes
Or ssh -X into host, then ssh -X into docker.
In either of the above cases, you should NOT share .Xauthority with the container
Running docker interactively from within the ssh session
The easiest way to get this done is to set --net=host and X11UseLocalhost yes.
If your docker is running sshd, you can open a second ssh -X session on your local machine and use the jumphost method as above.
If you start it in the ssh session, you can either pass -e DISPLAY=$DISPLAY or export it once you're in. You might have to export it if you attach to an existing container where this line wasn't used.
Use these docker args for --net host and X11UseLocalhost yes (assembled into a full command in the sketch after this list):
ssh -X to host
-e DISPLAY=$DISPLAY
-v $HOME/.Xauthority:/home/same-as-dash-u-user/.Xauthority
-u user
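Assembled into a single command, that might look like this sketch (someuser and your-image-name are placeholders):
docker run -it --net=host \
    -e DISPLAY=$DISPLAY \
    -v "$HOME/.Xauthority:/home/someuser/.Xauthority" \
    -u someuser \
    your-image-name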
What follows is explanation of how everything works and other approaches to try
About Xauthority
ssh -X/-Y sets up a session key in the host's Xauthority file, then sets up a listening port on which it places an X11 proxy that uses the session key and converts it to be compatible with the key on your local machine. By design, the .Xauthority keys will be different between your local machine and the host machine. If you use jumphosts/ProxyCommand, the keys between the host and the container will yet again be different from each other. If you instead use ssh tunnels or a direct X11 connection, you will have to share the host Xauthority with the container. In that case you can only have one active session per user, since each new session invalidates the previous ones by modifying the host's .Xauthority so that it only works with that session's ssh X11 proxy.
X11UseLocalhost no theory
Even though X11UseLocalhost no causes the X server to listen on the wildcard address, with --net host I could not redirect the container display to localhost:X.Y, where X and Y are from the host $DISPLAY.
X11UseLocalhost yes is the easy way
If you choose X11UseLocalhost yes, the DISPLAY variable on the host becomes localhost:X.Y, which causes the ssh X11 proxy to listen only on localhost (on port 6000+X).
If X11UseLocalhost is no, the DISPLAY variable on the host becomes the host's hostname:X.Y, which causes the X server to listen on 0.0.0.0:6000+X and causes X clients to reach out over the network to the hostname specified.
This is theoretical; I don't yet have access to docker on a remote host to test it.
But this is the easy way. We bypass that by redirecting the DISPLAY variable to always be localhost, and do docker port mapping to move the data from localhost:X+1.Y on the container to localhost:X.Y on the host, where ssh is waiting to forward X traffic back to the local machine. The +1 makes us agnostic to running either --net=host or --net=bridge.
Setting up container ports requires specifying EXPOSE in the Dockerfile and publishing the ports with the -p flag.
Setting everything up manually without ssh -X
This works only with --net host. This approach works without xauth because we are piping directly to your X11 unix domain socket on the local machine.
ssh to host without -X
ssh -R6010:localhost:6010 user@host
start docker with -e DISPLAY=localhost:10.1 or export inside
in another terminal on local machine
socat -d -d TCP-LISTEN:6010,fork UNIX-CONNECT:/tmp/.X11-unix/X0
In original terminal run xclients
If the container is on a bridged network and you can't use docker ports, enable sshd in the container and use the jumphost method above.

Cannot connect to Google Compute Engine instance via SSH in browser

I cannot connect to GCE via ssh. It shows Connection Failed, and we are unable to connect to the VM on port 22.
And the serial console output shows:
Jul 8 10:09:26 Instance sshd[10103]: error: Could not load host key: /etc/ssh/ssh_host_ed25519_key
Jul 8 10:09:27 Instance sshd[10103]: User username from 0.0.0.0 not allowed because not listed in AllowUsers
Jul 8 10:09:27 Instance sshd[10103]: input_userauth_request: invalid user username [preauth]
Jul 8 10:09:27 Instance sshd[10103]: Connection closed by 0.0.0.0 [preauth]
Yesterday it was working fine, but today it shows this error. I am new to GCE. Any suggestions?
UPDATE
I'd like to post this update to mention that in June 2016 a new feature was released that lets you enable interactive access to the serial console, so you can more easily troubleshoot instances that are not booting properly or that are otherwise inaccessible. See Interacting with the Serial Console for more information.
-----------------------------------------------------------------------------------
It looks like you've added AllowUsers in /etc/ssh/sshd_config configuration file.
To resolve this issue, you'll need to attach the boot disk of your VM instance to a healthy instance as the second disk. Mount it, edit the configuration file and fix the issue.
Here are the steps you can take to resolve the issue:
First of all, take a snapshot of your instance's disk, so that if loss or corruption happens you can recover your disk.
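With gcloud, taking the snapshot might look like this (disk, zone and snapshot names are placeholders):
$ gcloud compute disks snapshot DISK --zone ZONE --snapshot-names SNAPSHOT_NAME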
In the Developers Console, click on your instance. Uncheck Delete boot disk when instance is deleted and then delete the instance. The boot disk will remain under “Disks”, and now you can attach the disk to another instance. You can also do this step using gcloud command:
$ gcloud compute instances delete NAME --keep-disks all
Now attach the disk to a healthy instance as an additional disk. You can do this through the Developers Console or using the gcloud command:
$ gcloud compute instances attach-disk EXAMPLE-INSTANCE --disk DISK --zone ZONE
SSH into your healthy instance.
Determine where the secondary disk lives:
$ ls -l /dev/disk/by-id/google-*
Mount the disk:
$ sudo mkdir /mnt/tmp
$ sudo mount /dev/disk/by-id/google-persistent-disk-1-part1 /mnt/tmp
Where google-persistent-disk-1 is the name of the disk
Edit the sshd_config configuration file, remove the AllowUsers line, and save it:
$ sudo nano /mnt/tmp/etc/ssh/sshd_config
Now unmount the disk:
$ sudo umount /mnt/tmp
Detach it from the VM instance. This can be done through the Developers Console or using the command below:
$ gcloud compute instances detach-disk EXAMPLE-INSTANCE --disk DISK
Now create a new instance using your fixed boot disk.
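With gcloud, recreating the instance from the fixed boot disk might look like this (names and zone are placeholders):
$ gcloud compute instances create NEW-INSTANCE --disk name=DISK,boot=yes --zone ZONE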

keep server running on EC2 instance after ssh is terminated

Currently, I have two servers running on an EC2 instance (MongoDB and bottlepy). Everything worked when I SSHed into the instance and started those two servers. However, when I closed the SSH session (the instance is still running), I lost those two servers. Is there a way to keep the servers running after logging out? I am using Bitvise Tunnelier on Windows 7.
The instance I am using is Ubuntu Server 12.04.3 LTS.
For those landing here from a google search, I would like to add tmux as another option. tmux is widely used for this purpose, and is preinstalled on new Ubuntu EC2 instances.
Managing a single session
Here is a great answer by Hamish Downer given to a similar question over at askubuntu.com:
I would use a terminal multiplexer - screen being the best known, and tmux being a more recent implementation of the idea. I use tmux, and would recommend you do too.
Basically tmux will run a terminal (or set of terminals) on a computer. If you run it on a remote server, you can disconnect from it without the terminal dying. Then when you login in again later you can reconnect, and see all the output you missed.
To start it the first time, just type
tmux
Then, when you want to disconnect, you do Ctrl+B, D (ie press Ctrl+B, then release both keys, and then press d)
When you login again, you can run
tmux attach
and you will reconnect to tmux and see all the output that happened. Note that if you accidentally lose the ssh connection (say your network goes down), tmux will still be running, though it may think it is still attached to a connection. You can tell tmux to detach from the last connection and attach to your new connection by running
tmux attach -d
In fact, you can use the -d option all the time. On servers, I have this in my .bashrc:
alias tt='tmux attach -d'
So when I login I can just type tt and reattach. You can go one step further if you want and integrate the command into an alias for ssh. I run a mail client inside tmux on a server, and I have a local alias:
alias maileo='ssh -t mail.example.org tmux attach -d'
This does ssh to the server and runs the command at the end (tmux attach -d). The -t option ensures that a terminal is started; if a command is supplied, it is not run in a terminal by default. So now I can run maileo on a local command line and connect to the server and the tmux session. When I disconnect from tmux, the ssh connection is also killed.
This shows how to use tmux for your specific use case, but tmux can do much more than this. This tmux tutorial will teach you a bit more, and there is plenty more out there.
Managing multiple sessions
This can be useful if you need to run several processes in the background simultaneously. To do this effectively, each session will be given a name.
Start (and connect to) a new named session:
tmux new-session -s session_name
Detach from any session as described above: Ctrl+B, D.
List all active sessions:
tmux list-sessions
Connect to a named session:
tmux attach-session -t session_name
To kill/stop a session, you have two options. One option is to enter the exit command while connected to the session you want to kill. Another option is by using the command:
tmux kill-session -t session_name
If you don't want to run some process as a service (or via an apache module) you can (like I do for IRC) use GNU screen.
screen keeps running on your server even if you close the connection - and thus every process you started within will keep running too.
It would be nice if you provided more info about your environment, but assuming it's Ubuntu Linux, you can start the services in the background or as daemons.
sudo service mongodb start
nohup python yourbottlepyapp.py &
(Use nohup if you are in an ssh session and want the process to keep running after the session closes.)
You can also run your bottle.py app using Apache mod_wsgi. (Running under the apache service) More info here: http://bottlepy.org/docs/dev/deployment.html
Hope this helps.
Addition: (your process still runs after you exit the ssh session)
Take this example time.py
import time
time.sleep(3600)
Then run:
$ python3 time.py &
[1] 3027
$ ps -Af | grep -v grep | grep time.py
ubuntu 3027 2986 0 18:50 pts/3 00:00:00 python3 time.py
$ exit
Then ssh back to the server
$ ps -Af | grep -v grep | grep time.py
ubuntu 3027 1 0 18:50 ? 00:00:00 python3 time.py
The process is still running (notice it now has no tty).
You will want the started services to disconnect from the controlling terminal. I would suggest you use nohup to do that, e.g.
ssh my.server "/bin/sh -c nohup /path/to/service"
you may need to put an & in there (in the quotes) to run it in the background.
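A fuller sketch of that idea, with output redirected so the ssh session can close cleanly (the log path is an arbitrary choice):
ssh my.server "nohup /path/to/service > /tmp/service.log 2>&1 < /dev/null &"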
As others have commented, if you run proper init scripts to start/stop services (or ubuntu's service command), you should not see this issue.
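For example, on Ubuntu 12.04 that would be an Upstart job; a minimal sketch (the job name and paths are assumptions, not from the question):
# /etc/init/bottleapp.conf
description "bottle.py app"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
exec /usr/bin/python /path/to/yourbottlepyapp.py
Then start it with sudo service bottleapp start.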
On Linux-based instances, putting the job in the background followed by disown seems to do the job.
$ ./script &
$ disown