sftp fails with 'message too long' error - ssh

My Java program uses ssh/sftp for transferring files to Linux machines (obviously...), and my library for doing so is JSch (though it's not to blame).
Now, some of these Linux machines have shell login startup scripts, which tragically cause the ssh/sftp connection to fail with the following message:
Received message too long 1349281116
After briefly reading about it, it's clearly a known ssh design issue (not a bug - see here). And all the suggested solutions are on the ssh server side (i.e. disabling scripts that output messages during shell login).
My question - is there an option to avoid this issue on the client side?

Check your .bashrc and .bash_profile on the server, and remove anything that can echo. For now, comment the lines out.
Try again. You should not see this message again.

Put the following at the top of ~/.bashrc for the login user on the remote machine:
# If not running interactively, don't do anything; just return early from .bashrc
[[ $- == *i* ]] || return
This will just return early from .bashrc instead of sourcing the entire file, which you do not want when performing an scp or sftp onto that remote machine. Depending on the shell of the remote user, make this edit in ~/.bashrc, ~/.bash_profile, ~/.profile, or similar.
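A quick client-side check for whether a remote account's startup scripts will break sftp/scp is to run a no-op command and count the bytes that come back; a minimal sketch (user and host are placeholders):
# Any bytes here come from the remote login scripts, not from /bin/true;
# non-zero output means sftp/scp to this account is likely to break.
ssh user@remotehost /bin/true | wc -c
If this prints 0 after the edit above, the guard is working.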

I got that error too, during an sftp get call in a bash script.
According to the OP's error message, which was similar to mine, it looks like the -B option of the sftp command was set - although a buffer size of 1349281116 bytes is "a bit" too high.
In my case I had also set the buffer size explicitly (with "good intentions"), which caused the same error message, followed by my set value.
Removing the forced value and letting sftp run with the default of 32K solved the problem for me.
-B buffer_size
Specify the size of the buffer that sftp uses when transferring
files. Larger buffers require fewer round trips at the cost of
higher memory consumption. The default is 32768 bytes.
If this proves to be the same issue, that would work as a client-side solution.
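To rule this out on the client side, run the same transfer once without -B so the 32768-byte default applies; a sketch (host, paths, and the example size are placeholders):
# With an explicitly forced request size, as described above:
sftp -B 262144 user@example.com:/remote/file /local/dir
# Without -B, sftp falls back to its 32768-byte default:
sftp user@example.com:/remote/file /local/dir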

NOTE: I had to fix .bashrc output on the remote hosts, not the host that's issuing the scp or sftp command.

Here's a quick-and-dirty solution, but it seems to work - also on binary files. All credit goes to uvgroovy.
Given file 'some-file.txt', just do:
cat some-file.txt | ssh root@1.1.1.1 /bin/bash -c "cat > /root/some-new-file.txt"
Still, if anyone knows a built-in sftp/scp way to do this on the client side, that would be great.
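For whole directories, the same exec-channel trick can be extended by piping tar through ssh; a hedged sketch (host and paths are placeholders), not a built-in scp/sftp feature:
# Pack locally, unpack remotely. This stays binary-safe because the
# remote side only reads our tar stream from stdin; stray login-script
# output comes back to our terminal instead of into a protocol parser.
tar -cf - some-dir | ssh root@1.1.1.1 "tar -xf - -C /root"
Note this only helps in the push direction; pulling data back through stdout would be corrupted by the same stray login-script output.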

Nagios nrpe command fails but local command works

I'm using a custom script to check physical memory.
https://exchange.nagios.org/components/com_mtree/attachment.php?link_id=3329&cf_id=24
(I added the performance data)
Locally run with this:
/usr/lib64/nagios/plugins/check_custom_memory.sh
output:
OK - 30405 MB (96%) Free Memory | total=31513MB used=1108MB
When I run it from the nagios server with this command (hid actual IP for security reasons):
/usr/lib64/nagios/plugins/check_nrpe -t 60 -H xxx.xxx.xxx.xxx -c check_custom_memory.sh -a 10 5
output:
CRITICAL - 30405 MB (%) Free Memory | total=31513MB used=1108MB
It seems that check_nrpe is dropping the % value. This happens only on this server and not my other servers. All other checks run fine, and any other nrpe check to this remote server works fine too. It seems to be just this one check, which makes me think it's the script - but it works for other servers and locally, so I'm at a loss.
The /tmp/memcalc file has 666 permissions and is owned by nrpe on the remote server, and I can see it being written to as it should when run locally. When running with check_nrpe, the file is not being accessed or written.
Any ideas why?
I believe I found the issue. It seems to have something to do with SELinux. Normally we don't use it, but this server has it running, and it appears to block writes to the file that the script creates in the /tmp directory to calculate the percentage of free memory.
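If anyone hits something similar, the SELinux audit log is the quickest way to confirm it; a minimal check, assuming the audit tools are installed:
# Show recent AVC denials, e.g. nrpe being blocked from /tmp/memcalc:
ausearch -m avc -ts recent
# Or grep the raw audit log directly:
grep denied /var/log/audit/audit.log | grep nrpe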
As a result, I just rewrote the script to not use a temp file and to calculate the percentage using simple integer math, without being perfectly accurate (which is fine).
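For illustration, a no-temp-file version can read /proc/meminfo and do the percentage in shell integer arithmetic; a rough sketch of the idea, not the actual plugin:
#!/bin/bash
# Free-memory percentage without a temp file: integer math only, so the
# result is truncated rather than exact (good enough for this check).
total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
free_kb=$(awk '/^MemFree:/ {print $2}' /proc/meminfo)
pct=$(( free_kb * 100 / total_kb ))
echo "OK - $(( free_kb / 1024 )) MB (${pct}%) Free Memory"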

How does scp manage to handle Ctrl+C in sink mode

I'm curious about how scp handles a situation where a binary file contains escape sequences - in particular, the Ctrl+C ("\x03") character - from a programmer's point of view.
I have already tried starting it in sink mode and sending it a "\x03" character, and it clearly exited upon receiving it:
$ perl -e 'print "\x03"'|xsel
$ scp -t /tmp/somefile.txt
^C
$
However, transferring a binary file that contains the same character doesn't fail, though I believed it would.
I have also tried reading the source code of scp.c's source function to see if it attempts to escape any characters, but to my surprise it doesn't appear to.
The short answer is that the source scp instance communicates with the sink instance through a clean data pipe. Any byte values can be sent through the pipe, and no bytes receive any special treatment. This is the expected behavior for an ssh "shell" or "exec" channel with no PTY.
For the longer answer, I'm going to restrict this to OpenSSH scp running on unix systems. I assume that's the scenario that you're talking about.
The special behavior of keystrokes like Ctrl-C (interrupt the process) or Ctrl-D (end of file) is provided by the TTY interface. Programs like the ssh server use a unix feature called PTYs (pseudo-ttys) to provide a TTY interface for network clients. When you run a command like scp -t ... within an interactive session, you're running it in the context of a TTY (or PTY), and it's the TTY which would convert a typed Ctrl-C into an interrupt signal. Processes can disable this behavior, but the scp program doesn't do that, because it doesn't need to.
When you run scp in the normal way, like scp /some/local/file user@host:/some/remote/dir, the scp process that you launch runs a copy of ssh to make the connection to the remote system. The actual command that it runs will be something like this (simplified):
ssh user@host "scp -t /some/remote/dir"
In other words, it starts a copy of ssh which connects to the remote system and runs another copy of scp.
When ssh is run in this fashion, specifying a command to run on the remote system, it doesn't request a PTY for the channel by default. So the two scp instances will communicate with each other through a clean data pipe with no PTY involved.
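You can see the no-PTY behavior directly with tty, which reports whether its stdin is a terminal; a small demonstration (user and host are placeholders):
# With a command, ssh allocates no PTY by default, so the remote side
# sees a clean pipe:
ssh user@host tty        # prints "not a tty"
# Forcing a PTY with -t restores terminal semantics, including the
# special handling of bytes like Ctrl-C:
ssh -t user@host tty     # prints a pseudo-terminal device such as /dev/pts/0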

Rundeck - reboot server job

I have a Rundeck job that reboots a server: it sends the command sudo reboot. This works, and the server reboots.
The problem is that Rundeck doesn't get a signal back, so the job fails.
Is there a way to make this work and get a complete signal back in rundeck?
Perhaps wrap your command in a script, background the reboot operation, and return 0? I'm doing something similar with a set of development VMs, but I'm using virsh. I don't see why this couldn't be done with a physical server:
#!/bin/bash
ssh rundeck@yourserver sudo reboot &
exit 0
You may need to experiment a bit with the ssh options (perhaps '-f' and/or '-n') to get this to work properly.
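If backgrounding alone isn't reliable (the session can be torn down mid-command), a common variation is to delay the reboot slightly so ssh can disconnect cleanly first; a hedged sketch along the same lines:
#!/bin/bash
# Schedule the reboot a couple of seconds out, detach it from the
# session, and return success before the connection drops.
ssh -n rundeck@yourserver 'nohup sudo sh -c "sleep 2; reboot" >/dev/null 2>&1 &'
exit 0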
Playing around just now, I used this as a Local Command step:
ssh ${node.username}@${node.hostname} "reboot & exit"
The return code is ZERO and everybody is happy.

smbclient NT_STATUS_ACCESS_DENIED

About once every 10 years I need to wrestle with Samba as I migrate to new hosts, and then I repress the traumatic memory until I have to relearn it all the next time :S Hence this newbieish question.
I have a Ubuntu VM with a couple of shares - one ("Public") is unsecured, the other ("Public2") is secured, with the intention that it should be accessed only by an authenticated user account defined on the Ubuntu box. Both shares appear in Windows Explorer on both XP and Win8.1. However, I can't for the life of me work out how to log into the secure Public2 share.
Leaving Windows clients out of it, I've tried simply looping back to the box using smbclient, which produces the following output, indicating it just can't authenticate:
michael@ubuntu:~$ smbclient //ubuntu/Public2 --user=michael%mypasswd
Domain=[WORKGROUP] OS=[Unix] Server=[Samba 4.1.6-Ubuntu]
tree connect failed: NT_STATUS_ACCESS_DENIED
Meanwhile the unsecured share is accessible.
What (probably incredibly obvious) thing have I missed? Am I not specifying the username correctly?
/var/lib/samba/usershares/public (unsecured, works) contains:
#VERSION 2
path=/home/michael/Public
comment=
usershare_acl=S-1-1-0:F
guest_ok=y
sharename=Public
/var/lib/samba/usershares/public2 (which I can't access) contains:
#VERSION 2
path=/home/michael/Public2
comment=
usershare_acl=S-1-1-0:F
guest_ok=n
sharename=Public2
For users who prefer the command line, use
$ sudo smbpasswd -a <user_name>
This will prompt you to assign the password.
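Put together for this question's setup, the whole round trip looks roughly like this (username, password, and share taken from the question; smbpasswd -e just makes sure the entry is enabled):
sudo smbpasswd -a michael    # create the separate Samba password entry
sudo smbpasswd -e michael    # enable it, in case it was disabled
smbclient //ubuntu/Public2 --user=michael%mypasswd -c 'ls'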
WARNING: This refers to Samba 2. We are at Samba 4 now. Take care which version of Samba you are using. As stated in my comment, the GUI will break your configurations.
A work colleague has pointed me in the right direction:
The Linux user ID being used to access the Linux share needs to have a second "samba" password defined for it. The easiest way to do this is to install and run the GUI Samba Server Configuration app, which isn't installed by default.
The Samba documentation does explain this, but it's buried in the masses of documentation explaining all the various arcane aspects of smb.conf configuration etc.
The following article gets to the heart of the subject:
http://ubuntuhandbook.org/index.php/2014/05/ubuntu1404-file-sharing-samba/
You have to edit /etc/samba/smb.conf (e.g. with sudo nano /etc/samba/smb.conf) and set workgroup = [your domain].
There is no 'second samba password'. There is the Linux password, in /etc/passwd, and then there is the Samba password, which lives in either smbpasswd or passdb.tdb; which one, and where it is located, depends on the Samba version and the settings in smb.conf. BOTH must be set - the Linux one in /etc/passwd and the Samba one in one of the above. In most cases that is the issue behind this error message. Or try restarting the Lanman service, or Windows.
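To see whether a Samba-side entry actually exists for the user, whichever backend stores it, pdbedit can list the accounts; a minimal check:
# List all accounts in Samba's password database (smbpasswd/passdb.tdb);
# if the user is missing here, authentication against the share will fail.
sudo pdbedit -L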
But I want to comment on another, probably rarer case.
If you are using a customized Samba build - and only in that case - there might be another (extended) reason for this error.
Samba might be compiled with additional permission checks, which will say "NO" (return false), after which Samba will report the same error this question mentions.
Check the log for errors. There might be a clue if this is such a case.
Again, this is specific to custom-built Samba.
Specifically, in my case on a QNAP NAS, Samba calls a binary: /sbin/appriv -C -u 502 -S1
-C, --check Check user privilege.
-S, --samba [bit] The privilege of Samba
-u, --uid [uid] UID.
appriv is a symlink to nasutil, QNAP's own binary, not part of Linux or GNU.
With so many options built into Samba, I can't find a rationale for this additional check, especially when it could be satisfied with just a plain empty file returning "true". It's just a complication and a possible source of issues, with no security benefit.
I've been updating an old, abandoned QNAP system and replaced its Samba with one from another, newer NAS. That's how I came across this issue and wasted a lot of time on it. Thanks, QNAP.
AppArmor might also be the cause. You need to whitelist all share locations; otherwise you will always get the "permission denied" error.
Fix is adding to /etc/apparmor.d/local/usr.sbin.smbd:
"/path_to_share/" rk,
"/path_to_share/**" lrwk,
for each share. (The first line allows read access to the base directory; the second line allows read-write access to everything within that base directory, recursively.)
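After adding those lines, the smbd profile has to be reloaded before the change takes effect; one way, assuming the standard profile path:
# Reload the updated AppArmor profile for smbd.
sudo apparmor_parser -r /etc/apparmor.d/usr.sbin.smbd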
Source: https://wiki.archlinux.org/title/Samba#Permission_issues_on_AppArmor
Crosspost from: https://serverfault.com/a/1109267/592032

Only one PuTTY session working at a time?

I use PuTTY sessions to talk to an embedded device running QNX 6.4.1 using SSH over TCP/IP.
Today, one of my systems mysteriously won't allow me to have more than one PuTTY session open at a time. If I try to start a second session, I can authenticate with user name and password, but the sign-on banner prints with an extra blank line between each line and messes with my ability to hit Enter. I can do nothing that looks remotely valid except Ctrl-C or closing the PuTTY window.
I suspected the text file that contains the banner had bad line endings, but it does not.
I suspected terminal setting issues, but if I have one session open it works; with no changes to settings, just trying to open a second session, it does not.
I wondered if the .profile was getting mangled, but that doesn't seem to be the case either.
Now I'm down to "perhaps ssh is messed up and rebooting would fix it?" But I am hesitant to reboot, because if we lose the TCP/IP connection to it, it takes several hours of work (physical labor) to restore.
Any thoughts about what is going wrong and how I can fix it?
I'm connecting using PuTTY 0.62 from 64-bit Windows 7 to QNX 6.4.1. The openssh/openssl version is modern.
UPDATE
The issue came back a few days later. Using Guntram Blohm's suggestion below, I was able to at least get past the "Press enter once you've read the banner" screen. I then ran stty sane (terminated with Ctrl-J) as he recommended. Here is the output of stty:
Bad, after I had run stty sane (and hand-reformatted the output to be readable):
Name: /dev/ttyp1
Type: pseudo
Opens: 3
+raw +echo
+osflow
intr=^C quit=^\ erase=^? kill=^U eof=^D start=^Q stop=^S susp=^Z
lnext=^V min=01 time=00 pr1=^[ pr2=5B left=44 right=43 up=41
down=42 ins=40 del=50 home=48 end=59
I then opened another PuTTY session immediately after this, and it worked properly. It confuses me how it can work sometimes and not others. How can that happen? What is different?
Good:
Name: /dev/ttyp2
Type: pseudo
Opens: 2
+edit
+osflow
intr=^C quit=^\ erase=^? kill=^U eof=^D start=^Q stop=^S susp=^Z
lnext=^V min=01 time=00 pr1=^[ pr2=5B left=44 right=43 up=41
down=42 ins=40 del=50 home=48 end=59
So right now I have a good PuTTY terminal open, and a bad one. What else can I do to isolate this issue?
It's probably another process that uses the pseudo-terminal, puts it in a special state, then crashes without restoring the state. vi comes to mind, or maybe a file upload/download program. These programs change the terminal mode to read each character individually, instead of line by line, and tweak a few other things as well. Normally, logging out and back in SHOULD fix that, but I'm not sure QNX handles it correctly.
One thing you could do to copy the parameters of a working terminal to the messed-up one is to run stty -g on the good one, then paste that output onto the command line of the bad one. Like this (on Linux; I don't have a QNX box at the moment):
(on the good terminal)
gbl@bermuda$ stty -g
500:5:bf:8a3b:3:1c:7f:15:4:0:1:0:11:13:1a:0:12:f:17:16:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0
(on the bad one)
gbl@bermuda$ stty 500:5:bf:8a3b:3:1c:7f:15:4:0:1:0:11:13:1a:0:12:f:17:16:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0:0
These terminal modes are kept per pseudo-tty device; that's why your /dev/ttyp1 can be messed up while the /dev/ttyp2 that's allocated for the next ssh connection is OK.