Using tcl/expect to prepare a tunnel - ssh

As part of a bigger plan (jumping through a bunch of hops and then creating a port tunnel to MongoDB in a setup where PortForwarding is disabled), I attempted to write a tcl/expect script to verify whether it is possible to relay a stream prepared by tcl/expect.
Here is my experiment:
# terminal 1 [listen to 2000]
nc -l 2000
# terminal 2 [listen to 200 then connect it to 2000 using expect]
socat tcp-l:200 system:'./nc-test.exp'
# terminal 3 [connect to 200]
nc localhost 200
and my simple tcl/expect script (nc-test.exp):
#!/usr/local/bin/expect
log_user 0
spawn nc localhost 2000
stty raw -echo
interact -o -nobuffer
Now the issue is that everything I type in terminal 3 is echoed back to me. Strangely, this doesn't happen when I connect socat directly to nc localhost 2000, or when I execute the tcl/expect script directly. Could you please help me figure out:
What is causing the unwanted echo?
Is my bigger plan feasible? (My main worry is keeping the stream raw.)

The stty command in the expect script is acting on /dev/tty, which is probably the tty of terminal 2. However, spawn creates another pty to talk to the command it launches. That pty inherits its settings from the current tty, i.e. terminal 2, and so has echo turned on. It might be enough to simply move the stty raw -echo line to before the spawn, or, more explicitly, you can set the stty settings used by spawn with
set stty_init "raw -echo"
before the spawn.
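With that change, the script from the question would look something like this (a minimal sketch of the fix described above):
#!/usr/local/bin/expect
log_user 0
# Make the pty that spawn creates raw with echo off,
# instead of changing /dev/tty after the fact:
set stty_init "raw -echo"
spawn nc localhost 2000
interact -o -nobuffer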

Related

run noninteractive ssh command in background immediately suspends the job

(1) I can run the following command and get the output successfully
ssh server hostname
(2) If I run it in the background (not backgrounding hostname, but backgrounding ssh)
ssh server hostname &
and do nothing other than wait, I still get the output.
(3) However, if I type any key at the terminal before it finishes, the job immediately goes into the suspended state:
[ZSH] suspended (tty input) ssh server hostname
[BASH] Stopped ssh server hostname
What is the reason for this, and how can I solve it?
I just use hostname as an example; you can try sleep 5 instead if the program returns too quickly. The actual program I want to run lasts for minutes.
Use ssh -T -f server hostname, as the manual page states:
-f   Requests ssh to go to background just before command execution. This is useful if ssh is going to ask for passwords or passphrases, but the user wants it in the background. This implies -n. The recommended way to start X11 programs at a remote site is with something like ssh -f host xterm.
     If the ExitOnForwardFailure configuration option is set to “yes”, then a client started with -f will wait for all remote port forwards to be successfully established before placing itself in the background.
-T   Disable pseudo-tty allocation.
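For context on the suspension itself: a background process that tries to read from the terminal is stopped with SIGTTIN, and ssh keeps its stdin open by default even when only running a remote command, so the first keystroke you type makes the backgrounded ssh attempt a read and get suspended. Detaching stdin is therefore the other common fix (a sketch):
# -n redirects ssh's stdin from /dev/null, so the background job never touches the tty:
ssh -n server hostname &
# or let ssh place itself in the background after authentication:
ssh -f -T server hostname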

SSH to multiple hosts at once

I have a script which loops through a list of hosts, connecting to each of them with SSH using an RSA key, and then saving the output to a file on my local machine - this all works correctly. However, the commands to run on each server take a while (~30 minutes) and there are 10 servers. I would like to run the commands in parallel to save time, but can't seem to get it working. Here is the code as it is now (working):
for host in $HOSTS; do
    echo "Connecting to $host..."
    ssh -n -t -t "$USER@$host" "/data/reports/formatted_report.sh"
done
How can I speed this up?
You should add & to the end of the ssh call; it will then run in the background:
for host in $HOSTS; do
    echo "Connecting to $host..."
    ssh -n -t -t "$USER@$host" "/data/reports/formatted_report.sh" &
done
I tried using & to send the SSH commands to the background, but I abandoned this at first because, after the SSH commands complete, the script performs some more commands on the output files, which need to have been created by then.
Using & made the script skip directly to those commands, which failed because the output files were not there yet. But then I learned about the wait command, which waits for background commands to complete before continuing. Now this is my code, which works:
for host in $HOSTS; do
    echo "Connecting to $host..."
    ssh -n -t -t "$USER@$host" "/data/reports/formatted_report.sh" &
done
wait
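If you also want each host's output saved to its own local file and a visible per-host exit status, here is a variant sketch (the report_$host.txt naming is illustrative, not from the original script):
pids=()
for host in $HOSTS; do
    echo "Connecting to $host..."
    # -n keeps the backgrounded ssh from competing for the shared stdin
    ssh -n "$USER@$host" "/data/reports/formatted_report.sh" > "report_$host.txt" &
    pids+=($!)
done
for pid in "${pids[@]}"; do
    wait "$pid" || echo "a remote job exited with an error" >&2
done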
Try massh (http://m.a.tt/er/massh/), a nice tool for running ssh across multiple hosts.
The Hypertable project has recently added a multi-host ssh tool. This tool is built with libssh and establishes connections and issues commands asynchronously and in parallel for maximum parallelism. See Multi-Host SSH Tool for complete documentation. To run a command on a set of hosts, you would run it as follows:
$ ht ssh host00,host01,host02 /data/reports/formatted_report.sh
You can also specify a host name or IP pattern, for example:
$ ht ssh 192.168.17.[1-99] /data/reports/formatted_report.sh
$ ht ssh host[00-99] /data/reports/formatted_report.sh
It also supports a --random-start-delay <millis> option that will delay the start of the command on each host by a random time interval between 0 and <millis> milliseconds. This option can be used to avoid thundering herd problems when the command being run accesses a central resource.
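A combined usage sketch (the flag placement and the 5000 ms value are assumptions, not taken from the tool's documentation):
$ ht ssh --random-start-delay 5000 host[00-99] /data/reports/formatted_report.sh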

Tcl Expect fails spawning SSH to server but SSH from command line works

I have some code that I'm using to connect to a server and perform some commands. The code is as follows:
#!/usr/bin/expect
log_file ./log_std.log

proc setPassword {oldPass newPass} {
    send -- "passwd\r"
    expect "* Old password:"
    send -- "$oldPass\r"
    expect "* New password:"
    send -- "$newPass\r"
    expect "* new password again:"
    send -- "$newPass\r"
}

set server [lindex $argv 0]
spawn /bin/ssh perfgen@$server

# Increase buffer size to support large text responses
match_max 100000

# Conditionally expects a prompt for host authenticity
expect {
    "*The authenticity of host*" {
        send -- "yes\r"
    }
}
What I find very strange is that when I SSH from my command line, the SSH command works no problem. However, when I SSH from the script I get the following error:
spawn /bin/ssh perfgen@192.168.80.132
ssh: Could not resolve hostname 192.168.80.132
: Name or service not known
The same script runs against 3 servers, but 2 of the 3 always fail. However, if I log into the servers manually to do the work, all three servers pass.
Any idea what might be happening here? I'm completely stumped. This code was working up until about 2 weeks ago, and according to the server administrator nothing has changed in the server-side config.
Trimming any whitespace from the server argument solved the issue:
set serverTrimmed [string trim $server]
The clue is in the error output: the ": Name or service not known" fragment starts on a new line, which indicates the hostname value carried a trailing newline.
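The trimmed value is then what should be handed to spawn (a one-line sketch continuing the question's script):
spawn /bin/ssh perfgen@$serverTrimmed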

keep server running on EC2 instance after ssh is terminated

Currently, I have two servers running on an EC2 instance (MongoDB and bottlepy). Everything works when I SSH into the instance and start those two servers. However, when I close the SSH session (the instance is still running), I lose those two servers. Is there a way to keep the servers running after logging out? I am using Bitvise Tunnelier on Windows 7.
The instance I am using is Ubuntu Server 12.04.3 LTS.
For those landing here from a Google search, I would like to add tmux as another option. tmux is widely used for this purpose and comes preinstalled on new Ubuntu EC2 instances.
Managing a single session
Here is a great answer by Hamish Downer given to a similar question over at askubuntu.com:
I would use a terminal multiplexer - screen being the best known, and tmux being a more recent implementation of the idea. I use tmux, and would recommend you do too.
Basically tmux will run a terminal (or set of terminals) on a computer. If you run it on a remote server, you can disconnect from it without the terminal dying. Then when you login in again later you can reconnect, and see all the output you missed.
To start it the first time, just type
tmux
Then, when you want to disconnect, press Ctrl+B, D (i.e. press Ctrl+B, release both keys, and then press d).
When you login again, you can run
tmux attach
and you will reconnect to tmux and see all the output that happened. Note that if you accidentally lose the ssh connection (say your network goes down), tmux will still be running, though it may think it is still attached to a connection. You can tell tmux to detach from the last connection and attach to your new connection by running
tmux attach -d
In fact, you can use the -d option all the time. On servers, I have this in my .bashrc:
alias tt='tmux attach -d'
So when I login I can just type tt and reattach. You can go one step further if you want and integrate the command into an alias for ssh. I run a mail client inside tmux on a server, and I have a local alias:
alias maileo='ssh -t mail.example.org tmux attach -d'
This does ssh to the server and runs the command at the end, tmux attach -d. The -t option ensures that a terminal is started; if a command is supplied, it is not run in a terminal by default. So now I can run maileo on a local command line and connect to the server and the tmux session. When I disconnect from tmux, the ssh connection is also killed.
This shows how to use tmux for your specific use case, but tmux can do much more than this. This tmux tutorial will teach you a bit more, and there is plenty more out there.
Managing multiple sessions
This can be useful if you need to run several processes in the background simultaneously. To do this effectively, each session will be given a name.
Start (and connect to) a new named session:
tmux new-session -s session_name
Detach from any session as described above: Ctrl+B, D.
List all active sessions:
tmux list-sessions
Connect to a named session:
tmux attach-session -t session_name
To kill/stop a session, you have two options. One option is to enter the exit command while connected to the session you want to kill. Another option is by using the command:
tmux kill-session -t session_name
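Applied to the question's setup, a sketch (the session names and the exact server commands are illustrative; -d starts each session detached):
tmux new-session -d -s mongo 'mongod --dbpath /data/db'
tmux new-session -d -s web 'python yourbottlepyapp.py'
tmux list-sessions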
If you don't want to run the process as a service (or via an Apache module), you can (as I do for IRC) use screen.
screen keeps running on your server even if you close the connection - and thus every process you started within it will keep running too.
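Basic screen usage mirrors the tmux workflow above (a minimal sketch):
screen            # start a new session, then launch your servers inside it
# detach with Ctrl+A, then D; the session keeps running on the server
screen -r         # reattach from a later ssh session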
It would be nice if you provided more info about your environment, but assuming it's Ubuntu Linux, you can start the services in the background or as daemons:
sudo service mongodb start
nohup python yourbottlepyapp.py &
(Use nohup if you are launching it from an SSH session and want it to keep running after the session closes.)
You can also run your bottle.py app using Apache mod_wsgi (running under the apache service). More info here: http://bottlepy.org/docs/dev/deployment.html
Hope this helps.
Addition: (your process still runs after you exit the ssh session)
Take this example time.py
import time
time.sleep(3600)
Then run:
$ python3 time.py &
[1] 3027
$ ps -Af | grep -v grep | grep time.py
ubuntu 3027 2986 0 18:50 pts/3 00:00:00 python3 time.py
$ exit
Then ssh back to the server
$ ps -Af | grep -v grep | grep time.py
ubuntu 3027 1 0 18:50 ? 00:00:00 python3 time.py
The process is still running (notice it no longer has a controlling tty, and its parent is now PID 1).
You will want the started services to detach from the controlling terminal. I would suggest using nohup for that, e.g.
ssh my.server "nohup /path/to/service >/dev/null 2>&1 &"
The & (inside the quotes) runs it in the background, and redirecting the output keeps ssh from hanging around waiting for the remote side to close its output streams.
As others have commented, if you use proper init scripts to start/stop services (or Ubuntu's service command), you should not see this issue.
On Linux-based instances, putting the job in the background followed by disown also seems to do the job:
$ ./script &
$ disown

Forwarding signal to remote child from local parent over ssh

I have a script which executes a remote command and redirects the output to a local file. The remote command just reads a list of pcap files continuously and writes them to stdout. The final command is like this:
ssh root@host /sbin/path-to-utility | cat > local-file
The script which executes this remote command needs a signal handler to save the state of the overall transfer. I also want to send a signal to the remote command or process to stop reading pcap files, so that it exits after finishing the current file.
I tried the -t option and the signal handling works perfectly fine, but it adds some extra characters to the actual output written by the remote command and corrupts my pcap data. So either I need to handle the signal over ssh without the -t option, or I need to find out why ssh -t adds additional bytes to the data.
Please help!
Thanks,
Sachin.
The -t option tells ssh to allocate a terminal; the extra characters are intended to be interpreted by your terminal.
Perhaps, then, you should tell ssh you are using a different terminal, one that will not generate any extra characters but will still pass on signals. Does this work?
TERM=dumb ssh root@host /sbin/path-to-utility | cat > local-file
(I don't know what the best value for TERM would be.)
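Another angle worth trying (an untested sketch): keep -t so signals are forwarded, but switch the remote pty to raw mode, since a pty in its default mode translates LF to CRLF and that alone can corrupt binary pcap data:
ssh -t root@host 'stty raw -echo; exec /sbin/path-to-utility' | cat > local-file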