I have the following scenario:
In my work computer (A) I open a byobu (tmux) session.
Inside byobu, I open several terminals. Some of them are local to (A), but in others I ssh to a different computer (B).
I go home, and from my home computer (C) I ssh to (A), run "byobu" and find all my sessions in (A) or (B).
This works perfectly, except for running X11 applications. I don't leave any X11 application running when I change computers, but just running "xclock" sometimes works and sometimes doesn't ("cannot connect to X server localhost:n.0").
I understand this depends on the DISPLAY variable, and that it gets set so that X11 connects to whichever computer I last ran "byobu" from before creating that shell inside byobu, which could be (A) or (C). My problem is that I often don't know how to fix a session that no longer works. Sometimes I can open another session (another tab in byobu) and reuse the value of $DISPLAY from it, but that only works as long as the new session stays open, and not always. In other cases I've been able to detach byobu (F6), re-attach it (run "byobu") and open a new ssh connection to (B), and then that one works, but not the already existing sessions.
I have read some documents like SSH, X11 Forwarding, and Terminal Multiplexers or How to get tmux X display to come back?, but it is unclear to me how they apply (if they do) to my situation. For instance, should the .bashrc code from the former go on (A), (B), or (C)?
UPDATE/EDIT: I have found the correct way to do this. Simply type this in any of the byobu shells:
. byobu-reconnect-sockets
and the DISPLAY environment variable for your new ssh connection, as well as SSH_AUTH_SOCK and several other variables that depend on the primary login shell (the one in which you run byobu attach-session -t session_name, or for the screen backend byobu -D -R session_name, or however you prefer to attach), will be updated in every open shell in the session.
This is all supposed to happen simply by pressing Ctrl-F5, but I suspect that, like me, your terminal is intercepting the Ctrl-F5 (for me, I am using iTerm on a Mac) and either doing its own thing with it or sending the wrong control character sequence, so byobu doesn't receive it properly. It's a bit more typing, but sourcing the shell script as indicated above will do the same thing Ctrl-F5 is supposed to do, and will do it for ALL open byobu shells in the session. The rest of my original answer below you can probably now ignore, but I'll leave it there in case it is in some way useful to someone for some other purpose.
Also, you can edit the byobu-reconnect-sockets script (it is just a shell script); there are places in it to add any additional environment variables you want updated, so really none of the below is necessary.
(original answer follows)
When you ssh in again and reattach your byobu sessions, the X11 display forwarded by your new ssh connection is likely not the same as the proxy display that the initial ssh session created when you launched byobu. Suppose you ssh in for the first time and start a new byobu session with many shells and perhaps many forwarded X11 windows. This all works fine, because that first ssh shell sets the DISPLAY environment variable to the display it is listening on for X11 connections. This might be something like
[~/]$ printenv DISPLAY
localhost:11.0
All shells started by byobu (and tmux or screen on the backend) inherit the environment variables that were set when byobu was initially launched, i.e., the X11 display that was forwarded for your user by your first ssh connection.
Then you detach your byobu session, go home, and ssh back in. But this time you get a different X11 display, because some other user now has localhost:11.0. In the new ssh session you started from home, the value of DISPLAY might be, say, localhost:14.0. For X11 forwarding through this ssh connection, X11 clients need to connect to the ssh X11 proxy at display localhost:14.0, not localhost:11.0. You will likely not have the authorization keys for localhost:11.0 at that point; someone else will, or worse, if they have disabled X authentication, the X11 windows you are trying to open will start showing up on their screen. All you need to do to make it work is this:
1. Detach byobu.
2. You should now be in the current ssh shell. Run printenv DISPLAY and note (or copy) the value shown.
3. Reattach byobu.
4. In any shell you want to use X11 in, run export DISPLAY=localhost:14.0 (that's the value in this example; use whatever value you got in step 2).
5. X11 will now forward through ssh to your screen as you expect.
The catch - you have to do this separately in every open byobu shell in which you want to use X. To my knowledge there is no built-in way to set it in all shells at once, although I think there may be a way to run an arbitrary command in every shell at the same time; I don't know the key sequence off the top of my head, but a rough scripted sketch of the idea is at the end of this answer.
The annoying part - you have to do this every time you detach and disconnect your ssh connection and then reconnect and reattach byobu, since the DISPLAY variable in the new ssh shell has likely changed, while your byobu shells still hold whatever DISPLAY was when byobu was first started, or whatever you last set it to.
Even if you open new shells in byobu in some later ssh connection, those shells will still inherit the DISPLAY environment variable setting that was set when byobu was first started, all the way back to your first ssh connection. You have to do this with new shells too.
This annoys me constantly, and I'd love to take the time to develop some hack to at least make it less tedious. Best of all would be to have it happen along with Ctrl-F5, which effectively does exactly this, and also reconnects some other things you often want refreshed for your new ssh session, especially SSH_AUTH_SOCK for ssh-agent.
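In the meantime, here is a minimal sketch of the kind of hack I mean, assuming byobu is using the tmux backend. tmux keeps a per-session environment that it refreshes (for DISPLAY, SSH_AUTH_SOCK and a few others) whenever a new client attaches, so the fresh value can be read from there and pushed into every pane. Note that send-keys literally types into each pane, so only run this when every pane is sitting at a shell prompt:

# read the DISPLAY value tmux picked up from the newest client attach
NEW_DISPLAY="$(tmux show-environment DISPLAY | cut -d= -f2-)"
# type an export command into every pane of every window
for pane in $(tmux list-panes -a -F '#{pane_id}'); do
    tmux send-keys -t "$pane" "export DISPLAY=$NEW_DISPLAY" Enter
done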
I have a problem. I created a pool consisting of a single volume backed by one 2.5 TB file, just to fight file duplicates. I copied a folder with photos onto it; some of the photos were not backed up anywhere else. Just now I see that my pool folder is empty. When I checked with 'sudo zfs list' it said 'No datasets available'.
I thought the pool had been detached, and to attach it again I re-ran all of these commands:
sudo zpool create singlepool -f /home/john/zfsvolumes/zfs_single_volume.dat -m /home/share/zfssinglepool
sudo zfs set dedup=on singlepool
sudo zpool get dedupratio singlepool
sudo zfs set compression=lz4 singlepool
sudo chown -R writer:writer /home/share/zfssinglepool
Now I see an empty pool!
Can I get back the folders I copied to the pool before I re-created it?
Unfortunately, use of zpool create -f will recreate the pool from scratch even if ZFS recognizes that a pool has already been created using that storage:
-f    Forces use of vdevs, even if they appear in use or specify a conflicting replication level. Not all devices can be overridden in this manner.
This is similar to reformatting a partition with another file system: the old data is still written in place, but the references the file system needs to find it are erased. You may be able to pay an expert to reconstruct your data, but otherwise I'm afraid it will be very hard to get back from your pool. As in any data recovery mission, I'd advise making a copy of the data ASAP on some external media that you can use to do the recovery from, in case further attempts at recovery corrupt the data even further.
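For next time: before re-creating anything with -f, it is worth checking whether ZFS can still see the old pool and simply import it. A minimal sketch, using the directory from your question (this only helps before zpool create -f has overwritten the old pool's labels):

# list any importable pools found on file-backed vdevs in this directory
sudo zpool import -d /home/john/zfsvolumes
# if the old pool is listed, import it by name instead of re-creating it
sudo zpool import -d /home/john/zfsvolumes singlepool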
Could you please point out the difference between installing openssh-server and starting an ssh session with a given docker container, versus running docker run -t -i ubuntu /bin/bash and then performing some operations? How does docker attach compare to those two methods?
Difference 1. If you want to use ssh, you need to have ssh installed in the Docker image and running in your container. You might not want that, because of the extra load or from a security perspective. One way to go is to keep your images as small as possible - it avoids bugs like Heartbleed ;). Whether you want ssh is a point of discussion, but mostly personal taste. I would say only use it for debugging, and not to actually change your image. If you need the latter, you'd better build a new and better image. Personally, I have yet to install my first ssh server on a Docker image.
Difference 2. Using ssh, you start your container as specified by the CMD and maybe ENTRYPOINT in your Dockerfile; ssh then allows you to inspect that container and run commands for whatever use case you might need. On the other hand, if you start your container with the bash command, you effectively override your Dockerfile CMD. If you then want to test that CMD, you can still run it manually (probably as a background process); when debugging my images, I do that all the time. This is the development point of view.
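A minimal sketch of that workflow (the image name and server command are made up for illustration):

# bash overrides whatever CMD the Dockerfile defines
docker run -t -i --name debug myimage /bin/bash
# inside the container, run the original CMD by hand, e.g. in the background
/usr/local/bin/start-server &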
Difference 3. An extension of the 2nd, but from a different point of view. In production, ssh will always allow you to check out your running container. Docker has other options useful in this respect, like docker cp, docker logs and indeed docker attach.
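For example, something along these lines (container, image and file names are hypothetical):

docker run -d --name web myimage        # the container runs its own CMD, detached
docker logs web                         # view whatever it has written so far
docker cp web:/var/log/app.log .        # copy a file out of the running container
docker attach web                       # attach to its main process, screen-sharing style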
According to the docs, "The attach command will allow you to view or interact with any running container, detached (-d) or interactive (-i). You can attach to the same container at the same time - screen sharing style, or quickly view the progress of your daemonized process." However, I am having trouble actually using this in a useful manner. Maybe someone who uses it could elaborate on that?
Those are the only essential differences. There is no difference for image layers, committing or anything like that.
So I have been working on this for some time. Would like to know if there is a better way or if I am on the right track.
I would basically like to allow some users to log in to my server via SSH and then have a squid tunnel via that SSH connection.
The tricky part however is that I don't want these users to be able to execute ANY commands. I mean NOTHING at all.
So at this stage I have set up a jail via jailkit. The specific user is then placed in the jail and given bash as their shell.
The next step would be to remove all the commands in the /jail/bin/ directories etc so that they are not able to execute any commands.
Am I on the right path here? What would you suggest?
Also, I see that this will give them many "command not found" errors. How do I get rid of those?
Is there any other shell I could look at giving them that would not let them do anything?
You could set their shell to something like /bin/true, or maybe a simple script that outputs an informational message, and then have them log on using ssh -N (see the ssh manual page). I believe that allows them to use port forwarding without having an actual shell on the system.
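A minimal sketch, assuming a user named tunneluser and squid listening on its default port 3128 on the server (adjust names and ports to your setup):

# on the server: give the user a shell that does nothing
sudo usermod -s /bin/true tunneluser
# on the client: forward a local port to squid without running any remote command
ssh -N -L 3128:localhost:3128 tunneluser@server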
EDIT:
The equivalent of ssh -N in PuTTY is checking the "Don't start a shell or command at all" checkbox in its SSH configuration tab (Connection->SSH).
EDIT2:
As an alternative to this, you could use a script that enters an infinite sleep loop. The connection will remain alive until it is interrupted with Ctrl-C. I just tried this:
#!/bin/sh
echo "DNSH: Do-Nothing Shell"
while sleep 3600; do :; done
If you use this as a shell (preferably with a more helpful message), your users will be able to use port forwarding without an actual shell and without having to know about ssh -N and friends.
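To put it to use, something like the following should do (the script path and username are just examples); note that some tools such as chsh expect the shell to be listed in /etc/shells:

# install the script and make it the user's login shell
sudo install -m 0755 dnsh.sh /usr/local/bin/dnsh
sudo usermod -s /usr/local/bin/dnsh tunneluser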
I have an ssh script that uses a local key for login to the remote host - nothing too exciting there. The key has a passphrase, and I usually add it to an agent to avoid prompting.
Occasionally I run the program before the agent is running and it will hang waiting for the unlock phrase. In such cases, rather than prompt interactively, I want the command to simply fail.
Anyone know if there's an option for this?
Sure is.
ssh REMOTE_HOST -o "BatchMode yes"
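If you always want this behaviour for a given host, the same option can go in ~/.ssh/config (the host alias here is just an example):

Host myremote
    BatchMode yes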