How does scp manage to handle Ctrl+C in sink mode?

I'm curious about how scp handles a situation where a binary file contains escape sequences, and in particular the Ctrl+C ("\x03") character, from the programmer's point of view.
I have already tried starting it in sink mode and sending it a "\x03" character, and it clearly exited upon receiving it:
$ perl -e 'print "\x03"'|xsel
$ scp -t /tmp/somefile.txt
^C
$
However, transferring a binary file that contains the same character doesn't fail, though I believe that it should.
I have also tried reading the source code of the scp.c:source function to see whether it attempts to escape any characters, but to my surprise it doesn't appear to.

The short answer is that the source scp instance communicates with the sink instance through a clean data pipe. Any byte values can be sent through the pipe, and no bytes receive any special treatment. This is the expected behavior for an ssh "shell" or "exec" channel with no PTY.
For the longer answer, I'm going to restrict this to OpenSSH scp running on unix systems. I assume that's the scenario that you're talking about.
The special behavior of keystrokes like Ctrl-C (interrupt the process) or Ctrl-D (end of file) is provided by the TTY interface. Programs like the ssh server use a unix feature called PTYs (pseudo-ttys) to provide a TTY interface for network clients. When you run a command like scp -t ... within an interactive session, you're running it in the context of a TTY (or PTY), and it's the TTY which would convert a typed Ctrl-C into an interrupt signal. Processes can disable this behavior, but the scp program doesn't do that, because it doesn't need to.
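You can see this TTY layer in action with stty. A minimal sketch (the capture path is just an example): the intr entry in the terminal settings is what maps Ctrl-C to SIGINT, and disabling it makes Ctrl-C arrive as a literal 0x03 byte:
$ stty -a | grep intr      # shows intr = ^C among the special characters
$ stty intr undef          # disable the interrupt character
$ cat > /tmp/capture.bin   # Ctrl-C is now just a 0x03 byte in the input
$ stty intr '^C'           # restore the default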
When you run scp in the normal way, like scp /some/local/file user@host:/some/remote/dir, the scp process that you launch runs a copy of ssh to make the connection to the remote system. The actual command that it runs will be something like this (simplified):
ssh user@host "scp -t /some/remote/dir"
In other words, it starts a copy of ssh which connects to the remote system and runs another copy of scp.
When ssh is run in this fashion, specifying a command to run on the remote system, it doesn't request a PTY for the channel by default. So the two scp instances will communicate with each other through a clean data pipe with no PTY involved.
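You can observe the difference directly; tty reports whether standard input is a terminal (user@host is a placeholder here):
$ ssh user@host tty
not a tty
$ ssh -t user@host tty
/dev/pts/3
Without -t there is no PTY on the remote side, so nothing interprets a 0x03 byte travelling through the channel; it's just data.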

Related

How does running ssh with a command string work behind the scenes?

I have some difficulty understanding how ssh works behind the scenes when I run it with a command string.
Normally, I type ssh rick@1.2.3.4, and then I am logged into the remote machine and run some commands. If I don't use nohup or disown, once I close the session, all running processes started by ssh will stop. That's the ordinary case.
But if I run ssh with a command string, things become different: the process started by ssh won't stop if I close the session.
For example:
Run from the local command line: ssh rick@1.2.3.4 "while true; do echo \"123test\" >> sshtest.txt; done"
Run a remote script: ssh rick@1.2.3.4 './remoteScript_whichDoTheSameAsAbove.sh'
After closing the session with Ctrl+C or kill pid on the local machine, I can still see the process running on the remote machine with ps -ef.
So my question is, could someone make a short introduction on how ssh works when I run it with a command string like above?
Also, I get very confused when I see these 2 related questions during searching:
Q1: Getting ssh to execute a command in the background on target machine. What is this question asking for? Isn't ssh rick@1.2.3.4 'some command' already run in a separate shell/pts? I don't understand what "in the background" is for.
Q2: Keep processes running after SSH session disconnects. Doesn't simply running a remote script, ssh rick@1.2.3.4 './remoteScript.sh', meet his requirement? I get very confused when I see so many "magic" answers under that question.

AWS process launched from SSH terminates when SSH hangs up

I use SSH to connect to my AWS EC2 instances and run code that takes a long time to complete. I find that if my local computer sleeps (or even if I leave it unattended for a bit), the SSH connection hangs up (which is not fatal in itself), but this seems to terminate the code on the EC2 instance that I launched using SSH.
Also, I use SSH to locally monitor the exception of my remote code, so even if there's a way to tell the remote process to stay alive after SSH has gone, I still need a way to locally see the output of the process as it continues to run (without SSH).
How do I keep code running on my AWS EC2 instance after SSH has hung up; how can I monitor the output of such a process?
When you close your tty (ssh closing, in your case), your process gets a SIGHUP, and the default action on SIGHUP is to terminate. To avoid that, you can start the command with nohup, which makes it ignore SIGHUP, or trap SIGHUP in your code and ignore it.
There are a bunch of ways to track a background process, but perhaps the easiest is to have it write to a file and read that file from the other ssh session. If your process is a command run from the command line, you can redirect its standard output and standard error to a file. When such a file keeps getting new content, it is tedious to keep re-reading it, which is where the command tail -f comes in handy.
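Putting both pieces together, a minimal sketch (the script name and log path are just examples):
$ nohup python long_running.py > /tmp/run.log 2>&1 &
Then, from a second ssh session, follow the output as it grows:
$ tail -f /tmp/run.log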
Here is how you can configure your ssh connection to stay alive:
vi ~/.ssh/config # on your client side
Add this line to send a keepalive "null packet" every 120 seconds:
ServerAliveInterval 120
If you control the server side, make a similar change:
vi /etc/ssh/sshd_config
Add these lines at the bottom of the config file:
ClientAliveInterval 120
ClientAliveCountMax 720
This is for Linux; YMMV with settings on other OSes.
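The client-side setting can also be passed for a single connection, without editing the config file (user@host is a placeholder):
$ ssh -o ServerAliveInterval=120 user@host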
Use screen
local> ssh ...
remote> screen
remote+screen> python long_running.py ...
You can then detach from screen (Ctrl-A, then d) and even disconnect from SSH, and when you return by SSHing back in again, you can
remote> screen -r
to reconnect to your running code.

sftp fails with 'message too long' error

My Java program uses ssh/sftp for transferring files onto Linux machines (obviously...), and my library for doing so is JSch (though it's not to blame).
Now, some of these Linux machines have shell login startup scripts, which tragically cause the ssh/sftp connection to fail with the following message:
Received message too long 1349281116
After briefly reading about it, it's clearly a known ssh design issue (not a bug - see here): the sftp client reads the first four bytes of whatever the server sends as an SFTP packet length, so any text printed by a login script is misread as an absurdly large packet. All the suggested solutions are on the ssh-server side (i.e. disable scripts which output messages during shell login).
My question - is there on option to avoid this issue on client side?
Check your .bashrc and .bash_profile on the server and remove anything that can echo; for now, comment the lines out.
Try again. You should not see this message again.
Put the following at the top of ~/.bashrc for the relevant user on the remote machine:
# If not running interactively, don't do anything; just return early from .bashrc
[[ $- == *i* ]] || return
This returns early from the .bashrc instead of sourcing the entire file, which you do not want when performing an scp or sftp onto that target remote machine. Depending on the shell of that remote user, make this edit in ~/.bashrc, ~/.bash_profile, ~/.profile, or similar.
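To verify from the client side that the remote startup scripts are now quiet, run a trivial remote command (user@host is a placeholder); any output beyond the expected "ok" comes from the startup scripts and will break sftp:
$ ssh user@host 'echo ok'
ok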
I got that error too, during an sftp get call in a bash script.
According to the OP's error message, which was similar to mine, it looks like the -B option of the sftp command was set, although a buffer size of 1349281116 bytes is "a bit" too high.
In my case I had also set the buffer size explicitly (with "good intentions"), which caused the same error message, followed by my chosen value.
Removing the forced value and letting sftp run with the default of 32K solved the problem for me.
-B buffer_size
Specify the size of the buffer that sftp uses when transferring
files. Larger buffers require fewer round trips at the cost of
higher memory consumption. The default is 32768 bytes.
If it turns out to be the same issue, this would serve as a client-side solution.
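For reference, the buffer size is set like this (user@host is a placeholder); omitting -B leaves sftp at the 32768-byte default:
$ sftp -B 32768 user@host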
NOTE: I had to fix .bashrc output on the remote hosts, not the host that's issuing the scp or sftp command.
Here's a quick-n'-dirty solution, but it seems to work, even on binary files. All credit goes to uvgroovy.
Given a file 'some-file.txt', just do:
cat some-file.txt | ssh root@1.1.1.1 '/bin/bash -c "cat > /root/some-new-file.txt"'
Still, if anyone knows an sftp/scp built-in way to do this on the client side, that would be great.

Ansible: How to suppress SIGHUP

Ansible seems to be sending SIGHUP signals at the end of (certain?) tasks. This is a problem, as the tasks call a bash script which in turn starts a server instance.
Now, if the closing of Ansible's SSH session sends a SIGHUP, this will actually shut down the server, the start of which was the key point of the Ansible task in the first place.
So, is there a way I can guarantee that Ansible will use SSH in a way that will not send a SIGHUP signal when closing the task/session?
I could theoretically start the bash script using nohup, but this seems to be just a dirty workaround, as I know that SSH is capable of doing what I want: if I manually compose and pass the script command via SSH, like so:
ssh user@server "scriptToStartMyServer.sh params"
... then it works fine and no SIGHUP is sent (thus the server survives and is not shut down immediately after being started).
Edit:
Sadly we cannot avoid using these bash scripts to start servers and we cannot really change them as this is something given to us by the customer.
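For what it's worth, a minimal sketch of the nohup workaround as the remote command an Ansible shell task might run (the script path and log file are hypothetical):
nohup /path/to/scriptToStartMyServer.sh params > /tmp/server.log 2>&1 &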

Using microsoft telnet to copy file from remote location to local host

I am trying to use telnet to copy a file from a remote location, which happens to be a Windows Phone 8 device.
I am using the below 2 commands.
telnet 127.0.0.1 1023 -f C:\Documents\fpsnum.txt
type "C:\Data\Users\local\log.txt"
Manually this runs fine, but I need to run it through automation. I tried placing these commands in testcase.xml, but it doesn't do what it does when run manually.
I have also tried using a bat file to run these 2 commands, but the bat file could only launch a telnet session; it couldn't execute the second command.
Any idea/suggestions to work this out?
Telnet is meant to be an interactive terminal so it probably won't work this way.
You could use a program like "socket" or "nc" to open a raw TCP session to the server port and send the command that way, capturing the output. That would allow your automation, but note that the "telnet protocol", if it is actually in use, will include extra handshake bytes at the start. They're easy to strip out, though, and may not even be there, depending on the OS and the program listening on that port.
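A minimal sketch of that approach with nc, reusing the host, port, and remote path from the question (the output file name is just an example); note the CRLF line ending, which Windows-side listeners typically expect:
$ printf 'type "C:\\Data\\Users\\local\\log.txt"\r\n' | nc 127.0.0.1 1023 > output.txt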