Using expect script to do an ssh from a remote machine

I am new to expect scripts and have a use case where I need to ssh from a machine into which I have already ssh'd using an expect script. This is my code snippet:
#!/usr/bin/expect -f
set timeout 60
spawn ssh username@machine1.domain.com
expect "Password: "
send "Password\r"
send "\r" # This is successful. I am able to login successfully to the first machine
set timeout 60
spawn ssh username@machine2.domain.com  ;# This fails
This takes some amount of time and then fails with:
ssh: connect to host machine2.domain.com port 22: Operation timed out. I understand that 22 is the default port on which ssh runs and that I can manually override it by giving a -p option to ssh.
If I try to ssh independently, without the expect script, I get a prompt asking me to enter (yes/no). Where is the correct port picked up from when I execute ssh directly, without the expect script? And if I do not need to enter the port number in the shell, why would I need to enter one when using an expect script?

At that point, you don't spawn a new ssh: spawn creates a new process on your local machine. You just send a command to the remote server:
#!/usr/bin/expect -f
set timeout 60
spawn ssh username@machine1.domain.com
expect "Password: "
send "Password\r"
send "\r" # This is successful. I am able to login successfully to the first machine
# at this point, carry on scripting the first ssh session:
send "ssh username#machine2.domain.com\r"
expect ...

Related

Expect script not working and terminal closes immediately

I don't know what's wrong with the script. I set up a new profile in the iTerm terminal to run the script, but it never works and closes immediately. Here's the script:
#!/usr/bin/expect -f
set timeout 120
set secret mysecret
set username asdf
set host {123.456.789.010}
set password password123
log_user 0
spawn oathtool --totp --base32 $secret
expect -re \\d+
sleep 400
set otp $expect_out(0,string)
spawn ssh -2 $username@$host
expect "*assword:*"
send "$password\n"
expect "Enter Google Authenticator code:"
send "$otp\n"
interact
First, test your ssh connection with:
ssh -v <auser>@<ahost>
That will validate the SSH session works.
Make sure to not use ssh -T ..., since you might need a terminal for expect commands to work.
Second, add at least an echo at the beginning of the script, to see if it is called:
puts "Script running\r"
Third, see whether a bash script, with only part of it using expect (see the sketch below), would work better in this case.
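For instance, a minimal sketch of that third option, reusing the placeholder host, username and password from the question and leaving the OTP step out for brevity:
#!/bin/bash
# Sketch only: a bash wrapper that drives ssh through an embedded expect block.
# SSH_USER, SSH_HOST and SSH_PASS are placeholders taken from the question.
SSH_USER=asdf
SSH_HOST=123.456.789.010
SSH_PASS=password123

echo "Script running"

# unquoted heredoc, so the bash variables above are expanded before expect runs
expect <<EOF
set timeout 120
spawn ssh -2 $SSH_USER@$SSH_HOST
expect "*assword:*"
send "$SSH_PASS\r"
# a real script would go on to expect the Google Authenticator prompt,
# send the OTP, and so on; here we just wait for the session to end
expect eof
EOF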

How to check SSH credentials are working or not

I have a large number of devices, around 300, and different creds for them (SSH creds, API creds).
Since I cannot manually SSH to all those devices and check whether the creds are working, I am thinking of writing a script that takes the device IPs and returns YES if the SSH creds are working and NO if they are not.
I am new to all this stuff, so details will be appreciated!
I will run this script on a server from where I can ssh to all the devices.
Your question isn't clear as to what sort of credentials you use for connecting to each host: do all hosts have the same connection method, for instance?
Let's assume that you use ssh's authorised keys method to log in to each host (i.e. you have a public key on each host within the ~/.ssh/authorized_keys file). You can run ssh with a do nothing command against each host and look at the exit code to see if the connection was successful.
HOST=1.2.3.4
ssh -i /path/to/my/private.key user@${HOST} true > /dev/null 2>&1
if [ $? -ne 0 ]; then echo "Error, could not connect to ${HOST}"; fi
Now it's just a case of wrapping this in some form of loop where you cycle through each host (and choose the right key for each host; perhaps you could name each private key after the name or IP address of the target host). The script will then print out all those hosts for which a connection was not possible. Note that this script assumes that true is available on the target host; otherwise you could use ls or similar. We pipe all output to /dev/null as we're only interested in the ability to connect.
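For example, a rough sketch of such a loop, assuming one shared private key and a hypothetical hosts.txt file containing one IP address per line:
#!/bin/sh
# Sketch: try a key-based, do-nothing connection to every host in hosts.txt
# and report the ones that cannot be reached.
while read -r HOST; do
    if ! ssh -i /path/to/my/private.key \
             -o BatchMode=yes -o ConnectTimeout=5 \
             user@"${HOST}" true > /dev/null 2>&1; then
        echo "Error, could not connect to ${HOST}"
    fi
done < hosts.txt
BatchMode=yes stops ssh from falling back to an interactive password prompt, and ConnectTimeout keeps a dead host from stalling the whole loop.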
EDIT IN RESPONSE TO OP CLARIFICATION:
I'd strongly recommend not using username/password for login, as the username and password will likely be held in your script somewhere, or even in your shell history, if you run the command from the command line. If you must do this, then you could use expect or sshpass, as detailed here: https://srvfail.com/how-to-provide-ssh-password-inside-a-script-or-oneliner/
The ssh command shown does not spawn a shell, it literally logs in to the remote server, executes the command true (or ls, etc), then exits. You can use the return code ($? in bash) to check whether the command executed correctly. My example shows it printing out an error message for non-zero return codes, but to print out YES on successful connection, you could do this:
if [ $? -eq 0 ]; then echo "${HOST}: YES"; fi
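Combining the two, a hedged sketch of the password-based variant using sshpass (one of the tools mentioned above); the username, password and hosts.txt file are placeholders, and key-based auth remains the better option:
#!/bin/sh
# Sketch: print "<host>: YES" or "<host>: NO" for every device in hosts.txt,
# authenticating with a password via sshpass.
USER=admin        # placeholder username
PASS=secret       # placeholder password -- avoid hard-coding this in real use
while read -r HOST; do
    if sshpass -p "$PASS" \
       ssh -o ConnectTimeout=5 -o StrictHostKeyChecking=no \
       "${USER}@${HOST}" true > /dev/null 2>&1; then
        echo "${HOST}: YES"
    else
        echo "${HOST}: NO"
    fi
done < hosts.txt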

run noninteractive ssh command in background immediately suspends the job

(1) I can run the following command and get the output successfully
ssh server hostname
(2) If I run it in the background (not to background hostname, but to background ssh)
ssh server hostname &
and do nothing other than wait, I can get the output
(3) However, if I type any key at the terminal before it finishes, the job is immediately suspended:
[ZSH] suspended (tty input) ssh server hostname
[BASH] Stopped ssh server hostname
What is the reason for this and how to solve it?
I just use hostname as an example. You can try using sleep 5 instead if the program returns too quickly. The actual program I want to run lasts for minutes.
Use ssh -T -f server hostname as the manual page states:
-f   Requests ssh to go to background just before command execution. This is useful if ssh is going to ask for passwords or passphrases, but the user wants it in the background. This implies -n. The recommended way to start X11 programs at a remote site is with something like ssh -f host xterm.
     If the ExitOnForwardFailure configuration option is set to “yes”, then a client started with -f will wait for all remote port forwards to be successfully established before placing itself in the background.
-T   Disable pseudo-tty allocation.
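The underlying problem is that the backgrounded ssh still tries to read from the terminal (to forward your keystrokes to the remote side), so it gets stopped with SIGTTIN; detaching its stdin avoids that. A small sketch of both variants, with sleep 60; hostname standing in for the long-running command:
# let ssh put itself in the background after authentication (no shell & needed)
ssh -f -T server 'sleep 60; hostname'

# or keep the shell's &, but detach stdin with -n (which -f implies)
ssh -n -T server 'sleep 60; hostname' &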

Tcl Expect fails spawning SSH to server but SSH from command line works

I have some code that I'm using to connect to a server and perform some commands. The code is as follows:
#!/usr/bin/expect
log_file ./log_std.log
proc setPassword {oldPass newPass} {
    send -- "passwd\r"
    expect "* Old password:"
    send -- "$oldPass\r"
    expect "* New password:"
    send -- "$newPass\r"
    expect "* new password again:"
    send -- "$newPass\r"
}
set server [lindex $argv 0]
spawn /bin/ssh perfgen@$server
# Increase buffer size to support large text responses
match_max 100000
# Conditionally expects a prompt for host authenticity
expect {
    "*The authenticity of host*" {
        send -- "yes\r"
    }
}
What I find very strange is that when I SSH from my command line the SSH command works no problem. However, when I SSH from the shell script I get the following error:
spawn /bin/ssh perfgen@192.168.80.132
ssh: Could not resolve hostname 192.168.80.132
: Name or service not known
The same script runs against 3 servers, but 2 of the 3 servers always fail. However, if I log in to the servers manually to do the work, all three servers pass.
Any idea what might be happening here? I'm completely stumped. This code was working up until about 2 weeks ago and according to the server administrator nothing has changed on the server-side config.
Trimming any whitespace seemed to solve the issue:
set serverTrimmed [string trim $server]

How to use ssh to run a local command after connection and quit after this local command is executed?

I wish to use SSH to establish a temporary port forward, run a local command and then quit, closing the ssh connection.
The command has to be run locally, not on the remote site.
For example consider a server in a DMZ and you need to allow an application from your machine to connect to port 8080, but you have only SSH access.
How can this be done?
Assuming you're using OpenSSH from the command line....
SSH can open a connection that will sustain the tunnel and remain active for as long as possible:
ssh -fNT -Llocalport:remotehost:remoteport targetserver
You can alternately have SSH launch something on the server that runs for some period of time. The tunnel will be open for that time. The SSH connection should remain after the remote command exits for as long as the tunnel is still in use. If you'll only use the tunnel once, then specify a short "sleep" to let the tunnel expire after use.
ssh -f -Llocalport:remotehost:remoteport targetserver sleep 10
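For the example in the question (a local application that needs port 8080 on a host reachable only via the DMZ server), that might look like the following; curl is just a stand-in for whatever local application uses the forward:
# open the forward for 10 seconds; it stays up for as long as it is in use
ssh -f -L8080:remotehost:8080 targetserver sleep 10
# use the tunnel from the local side before it expires
curl http://localhost:8080/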
If you want to be able to kill the tunnel from a script running on the local side, then I recommend you background it in your shell, then record the pid to kill later. Assuming you're using an operating system that includes Bourne shell....
#!/bin/sh
ssh -f -Llocalport:remotehost:remoteport targetserver sleep 300 &
sshpid=$!
# Do your stuff within 300 seconds
kill $sshpid
If backgrounding your ssh using the shell is not to your liking, you can also use advanced ssh features to control a backgrounded process. As described here, the SSH features ControlMaster and ControlPath are how you make this work. For example, add the following to your ~/.ssh/config:
host targetserver
    ControlMaster auto
    ControlPath ~/.ssh/cm_sockets/%r@%h:%p
Now, your first connection to targetserver will set up a control, so that you can do things like this:
$ ssh -fNT -Llocalport:remoteserver:remoteport targetserver
$ ssh -O check targetserver
Master running (pid=23450)
$ <do your stuff>
$ ssh -O exit targetserver
Exit request sent.
$ ssh -O check targetserver
Control socket connect(/home/sorin/.ssh/cm_socket/sorin@192.0.2.3:22): No such file or directory
Obviously, these commands can be wrapped into your shell script as well.
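For example, a minimal wrapper built from the commands above; it assumes the ControlMaster/ControlPath settings are already in ~/.ssh/config for targetserver, and run-local-command stands in for whatever you need to run locally:
#!/bin/sh
# Sketch: open the tunnel, run the local command, then tear the tunnel down.
ssh -fNT -Llocalport:remotehost:remoteport targetserver
./run-local-command
ssh -O exit targetserver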
You could use a script similar to this (untested):
#!/bin/bash
coproc ssh -L 8080:localhost:8080 user@server
./run-local-command
echo exit >&${COPROC[1]}
wait