Write failed: broken pipe - ssh

I have a headless Ubuntu server. I ran a command on the server (snapraid sync) over SSH from my Mac. The command said it would take about 6 hours, so I left it running overnight.
When I came down this morning, the Terminal on the Mac said: "Write failed: broken pipe"
I'm not sure if the command executed fully. Is this a timeout issue? If so, how can I keep the SSH connection alive overnight?

This should resolve the problem for Mac OS X 10.8.2.
add:
ServerAliveInterval 120
TCPKeepAlive no
to this file:
~/.ssh/config
Or, if you want it to be a global change in the SSH client, to this file:
/private/etc/ssh_config
"ServerAliveInterval 120" basically says to "ping" the server with a NULL packet every 120s, and "TCPKeepAlive no" means to not set the SO_KEEPALIVE socket option (since you shouldn't need it with ServerAliveInterval already set, and apparently it's "spoofable" or some odd).
The servers similarly have something they could set for the same effect (ClientKeepAliveInterval) but typically you don't have control over those settings as much.
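For example, a minimal ~/.ssh/config entry applying this to every host could look like the following (the Host * pattern is just one way to scope it):
Host *
    ServerAliveInterval 120
    TCPKeepAlive no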

You can use "screen" util for that. Just connect to the server over SSH, start screen session by "screen" command execution, start your command there and disconnect (don't exit screen session). When you think your command already done you can connect to the server and attach to your screen session where you can see the command execution result/progress (in case one should be).
See "man screen" for more details.

This should resolve the problem for Ubuntu and Linux Mint:
add:
ServerAliveInterval 120
TCPKeepAlive yes
to this file:
/etc/ssh/ssh_config

Instead of screen, I'd recommend tmux, an (arguably) better competitor to screen:
tmux new-session -s {name}
That command creates a session. Any time after that you want to connect:
tmux a -t {name}
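To leave the session running, detach with Ctrl-b d (or simply drop the SSH connection) rather than exiting; you can list the sessions available for reattaching with:
tmux ls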

There are two solutions.
To update the server (and then restart sshd):
echo "ClientAliveInterval 60" | sudo tee -a /etc/ssh/sshd_config
To update the client:
echo "ServerAliveInterval 60" >> ~/.ssh/config

After trying to change many of the above parameters in sshd_config (ClientAliveInterval, ClientAliveCountMax, TCPKeepAlive...), nothing changed. I spent hours and days looking for a solution on forums and blogs...
It turned out that the broken pipe problem, which prevented connecting with ssh/sftp, came from the permission settings on the ChrootDirectory.
The ChrootDirectory has to be owned by root:root with 755 permissions.
Looser permissions (765/766/775...) won't work, but stricter ones do (e.g. 700).
If you need to give write permission to the connected user, you can grant it on sub-directories.
If the chroot is owned by sftpUser:sftpGroup, it won't work either...
chroot -> root:root 755
└── subdirectories -> sftpUser:sftpGroup 700 up to 770
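As a concrete sketch, assuming the chroot is /home/sftpUser and the user/group are sftpUser/sftpGroup (all of these names are just examples):
sudo chown root:root /home/sftpUser
sudo chmod 755 /home/sftpUser
sudo mkdir -p /home/sftpUser/upload
sudo chown sftpUser:sftpGroup /home/sftpUser/upload
sudo chmod 770 /home/sftpUser/upload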
Hope it helps.

If you're still having problems after editing /etc/ssh/sshd_config, or if ~/.ssh/config simply does not exist on your machine, then I highly recommend reinstalling ssh. This solution took about a minute and fixed both the "Broken pipe" errors and the "closed by remote host" errors.
sudo apt-get purge openssh-server
sudo apt update
sudo apt install openssh-server

jeremyforan's answer is correct; however, I've found that if you are trying to use scp, it is necessary to explicitly point it to a config file configured as described, since it seems not to obey the normal config hierarchy. For example:
scp -F ~/.ssh/config myfile joe@myserver.com:~
works, while omitting the -F still results in the broken pipe error.

Ubuntu:
ssh -o ServerAliveInterval=5 -o ServerAliveCountMax=1 user@x.x.x.x

I use an ASUS router with two internet lines. I assigned my IP to a specific line, and it works.

Related

ssh and sudo: pam_unix(sudo:auth): conversation failed, auth could not identify password for [username]

I'm facing a weird behavior trying to run rsync as sudo through ssh with passwordless login.
This is something I do with dozens of servers, but I'm having this frustrating problem connecting to a couple of Ubuntu 18.04.4 servers.
PREMISE
Passwordless SSH from CLIENT to SERVER with account USER works nicely.
When I'm logged in on SERVER, I can sudo everything with account USER.
On SERVER I've added the following to /etc/sudoers
user ALL=NOPASSWD:/usr/bin/rsync
Now, if I launch this simple test from machine CLIENT as user USER, I receive the following sudo error message:
$ ssh utente@192.168.200.135 -p 2310 sudo rsync
sudo: no tty present and no askpass program specified
Moreover, looking in the SERVER's /var/log/auth.log, I found these errors:
sudo: pam_unix(sudo:auth): conversation failed
sudo: pam_unix(sudo:auth): auth could not identify password for [user]
I am not a PAM expert, but I tested the following solution and it works on Ubuntu 16.04.5 and 20.04.1.
NOTE: /etc/ssh/sshd_config is left at its default configuration.
$ sudo visudo -f /etc/sudoers.d/my_config_file
and add the line below:
my_username ALL=(ALL) NOPASSWD:ALL
Don't forget to restart sshd:
$ sudo systemctl restart sshd
I found a solution thanks to CentOS. In fact, because of the more complex configuration of /etc/sudoers on CentOS (compared to Ubuntu or Debian), I was forced to put my additional configuration in an external file in /etc/sudoers.d/ instead of putting it directly into /etc/sudoers.
SOLUTION:
Putting additional configurations directly into /etc/sudoers wouldn't work.
Putting the needed additional settings in a file within the directory /etc/sudoers.d/ will work.
E.g., these are the config lines put in a file named /etc/sudoers.d/my_config_file:
Host_Alias MYSERVERHOST=192.168.1.135,localhost
# User that will execute Rsync with Sudo from a remote client
rsyncuser MYSERVERHOST=NOPASSWD:/usr/bin/rsync
Why didn't /etc/sudoers work? It's still unknown to me, even after two days' worth of Internet searching. I find this very obscure and awful.
What follows is a quote from this useful article: https://askubuntu.com/a/931207
Unlike /etc/sudoers, the contents of /etc/sudoers.d survive system upgrades, so it's preferable to create a file there rather than to modify /etc/sudoers.
For editing any configuration file to be used by sudo, the visudo command is preferable.
i.e.
$ sudo visudo -f /etc/sudoers.d/my_config_file
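To check the result from the client side, a quick test like the following (reusing the host and port from the question, and assuming the connecting account matches the one named in the sudoers.d file) should now run rsync through sudo without prompting for a password:
ssh rsyncuser@192.168.200.135 -p 2310 sudo rsync --version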
I had a similar problem on a custom Linux server, but the solution was similar to the answers above.
As soon as I removed the line your_user ALL=(ALL) NOPASSWD:ALL from /etc/sudoers, the errors were gone.

Running Sudo inside SSH with << heredoc

I'm sure you will find this question similar to many other posts on Stack Overflow or elsewhere on the internet. However, I could not find a precise solution to my problem. I have a list of tasks to be run on a remote server, and passing a script is OK, but it does not suit the requirement.
I'm running the following from my server to connect to the remote server:
ssh -t user@server << 'HERE'
sudo su - <diff_user>
do task as diff_user
HERE
ssh -tt user@server << 'HERE'
sudo su - <diff_user>
do task as diff_user
HERE
With the first option (-t), I'm still not able to use sudo; it says:
sudo: sorry, you must have a tty to run sudo
With the second option (-tt), the remote input/output gets echoed back into my current server session, a total mess.
I tried passing the content as a script for SSH to run on the remote host, but got similar results.
Is there a way other than commenting out the line below?
Defaults requiretty in the /etc/sudoers file
I have not tried the above, though. I know Red Hat has approved removing/commenting it out in a future version, whenever that is. If I go that route, I will have to get the above done on hundreds of VMs (moreover, I don't have permission to edit the file on the VMs to give it a try).
Bug 1020147
Hence, my issue remains the same as before. It would be great if I could get some input from the experts here :)
Additional info: Using Red Hat RHEL 6, kernel 2.6.32-573.3.1.
I do have access to the remote host, and once I'm in, my ID does not require a password to switch to diff_user.
Since you are asking this way, I guess you don't have passwordless sudo.
You can't communicate with the remote process (sudo) when you pass the script on stdin.
You should instead use ssh with an su command:
ssh -t user#server "sudo su - <diff_user> -c do task as diff_user"
but it might not work. An interactive session can be scripted using expect (there are a lot of questions about that around here).
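As a sketch of the pattern above, with a concrete task substituted for the placeholder (the user, path, and script name are made up):
ssh -t user@server "sudo su - diff_user -c 'cd /srv/app && ./run_tasks.sh'"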
I was trying to connect to another machine in an automated fashion and check some logs only accessible to root/sudo.
This was done by passing the password, server, user, etc. in a file. I know this is neither safe nor good practice, but this is the way it will be done in my company.
I ran into several problems:
tcgetattr: Inappropriate ioctl for device;
tty-related problems that I don't remember exactly;
sudo: sorry, you must have a tty to run sudo, etc.
Here is the code that worked for me:
#!/bin/bash

# Reads the connection details and log parameters from the file passed in,
# then fetches the last lines of the remote log via sudo.
function checkLog(){
    FILE=$1
    readarray -t LINES < "$FILE"
    machine=${LINES[4]}
    user=${LINES[5]}
    password=${LINES[6]}
    fileName=${LINES[7]}
    numberOfLines=${LINES[8]}

    # Build the remote command; the password is fed to sudo -S on stdin
    IFS='' read -r -d '' SSH_COMMAND <<EOT
sudo -S <<< '$password' tail $fileName -n $numberOfLines
EOT

    RESULTS=$(sshpass -p "$password" ssh -tt "$user@$machine" "${SSH_COMMAND}")
    echo "$RESULTS"
}

checkLog "$1"
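For reference, given the array indices used above (LINES[4] through LINES[8]), the file passed as $1 is expected to carry the machine, user, password, log file name, and line count on lines 5 through 9; the first four lines are not used by this function. A purely hypothetical example:
placeholder
placeholder
placeholder
placeholder
logserver.example.com
admin
s3cr3t
/var/log/syslog
50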

Run ssh on Apache -> Failed to get a pseudo terminal: Permission denied

I'm using Flask with Apache (mod_wsgi).
When I run ssh as an external command with subprocess.call("ssh ......", shell=True)
(my Python Flask code, which is not itself wrong):
ssh = "sshpass -p \""+password+"\" ssh -p 6001 "+username+"#"+servername+" \"mkdir ~/MY_SERVER\""
subprocess.call(ssh, shell=True)
I got this error in the Apache error_log: Failed to get a pseudo terminal: Permission denied
How can I fix this?
I've had this problem under RHEL 7. It's due to SELinux blocking the apache user from accessing a pty. To solve it:
Disable SELinux or set it to permissive (check your security needs): edit /etc/selinux/config and reboot.
Allow apache to control its directory for storing SSH keys:
chown apache /etc/share/httpd
Then, as the apache user (sudo -u apache), ssh to the desired host and accept the key.
I think apache's login shell is "/sbin/nologin".
If you want to allow apache to run shell commands, modify /etc/passwd and change its login shell to another shell like "/bin/bash".
However, this method is a security risk. Many Python SSH modules are available on the internet; use one of them instead.
What you are doing seems frightfully insecure. If you cannot use a Python library for your SSH connections, then you should at least plug the hole that is shell=True. There is very little here which is done by the shell anyway; doing it in Python affords you more control and removes a number of moving parts.
subprocess.call(['/usr/bin/sshpass', '-p', password,
    '/usr/bin/ssh', '-T', '-p', '6001', '{0}@{1}'.format(username, servername),
    'mkdir ~/MY_SERVER'])
If you cannot hard-code the paths to sshpass and ssh, you should at least make sure you have a limited, controlled PATH variable in your environment before doing any of this.
The fix for Failed to get a pseudo-terminal is usually to add a -T flag to the ssh command line. I did that above. If your real code actually requires a tty (which mkdir obviously does not), perhaps experiment with -t instead, and/or redirecting standard input and standard output.

How can I force `vagrant ssh` to do pseudo-tty allocation?

The first thing I do after vagrant ssh is usually attach to a tmux session.
I want to automate this, so I try vagrant ssh -c "tmux attach", but it fails and says "not a terminal".
After some googling I found this article and learned that I should force pseudo-tty allocation before executing a screen-based program, which can be done with the -t option of ssh.
But I don't know how to use this option with vagrant ssh.
According to this documentation, you should try adding -- to the command.
As I have not used vagrant, I am unsure of the formatting, but assume it would be similar to:
vagrant ssh -- -t
Unless you need to include the username and host, in which case add them.
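Combining this with the command from the question, the invocation would presumably look like the following (everything after -- is passed through to the underlying ssh call):
vagrant ssh -c "tmux attach" -- -t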

Cygrunsrv & autossh : A way to embedd remote commands in the command line?

I'm using cygrunsrv and autossh on Windows XP to create a service that builds a tunnel to a remote server, but I also want to create another tunnel from the remote server to another server.
I can achieve this with this command line:
autossh -M 5432 serverA -t 'autossh -M 4321 serverB -N'
But when I set it up in Cygwin through cygrunsrv to make it work as a service:
cygrunsrv -I TUNNEL -p /usr/bin/autossh -a "-M 5432 serverA -t 'autossh -M 4321 serverB -N'" -e AUTOSSH_NTSERVICE=yes -e AUTOSSH_POLL=20 -e AUTOSSH_GATETIME=30
it's not fully working. The service creates the tunnel to serverA correctly, but it's not sending the autossh command "autossh -M 4321 serverB -N" to serverA.
I tried escaping the quotes, but none of my efforts made any difference, and I'm not seeing any command being sent in the autossh logs.
I think the problem is related to a pseudo terminal not being created through cygrunsrv.
I'd like to know if there's a way to fix my cygrunsrv command line to make it work, or should I consider a different approach?
Lionel, try removing the AUTOSSH_NTSERVICE=yes from the cygrunsrv invocation. As /usr/share/doc/autossh/README.Cygwin explains:
Setting AUTOSSH_NTSERVICE=yes in the calling environment ... change[s] autossh's behavior in three useful ways:
(1) Add an -N flag to each invocation of ssh, thus disabling shell access. The idea is that if you're running autossh as a system service, you're using it to forward ports; it wouldn't make sense to run a shell session as a system service. (If you think this reasoning is wrong, please send a bug report to the author or Cygwin maintainer, and tell us what you're trying to do.)
Despite what the above says, it seems that you may have a good reason for not wanting -N (which suppresses command execution) in your service's ssh invocation. Removing AUTOSSH_NTSERVICE=yes should take care of it. It will have a couple of other minor disadvantages, but you can probably live with it. Read the rest of README.Cygwin for the details.
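In other words, the service registration would presumably become the same command with only that variable dropped:
cygrunsrv -I TUNNEL -p /usr/bin/autossh -a "-M 5432 serverA -t 'autossh -M 4321 serverB -N'" -e AUTOSSH_POLL=20 -e AUTOSSH_GATETIME=30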