Relative path does not work when run from a script

This is my first post at Stack Overflow.
I'm currently using xmlstarlet, with popen, to parse an XML file and return some results to me.
I want to organize the "changeable" files (the XMLs) inside a subfolder of my project, so I did the following:
fp = popen("xmlstarlet sel -t -m '//Program/Data' -v . -n < /DSP_DATA/test.xml", "r");
The issue is: I'm using a script to load the program and some configurations onto my embedded system (headless). When I execute the program directly over SSH, it runs great, showing all the outputs, but when I run it via the script, it shows:
sh: 1: cannot open /DSP_DATA/test.xml: No such file
Below is the script used to load the executable:
#This script will upload the executable in the "Debug" folder to the remote host and execute it in a terminal over SSH.
set REMOTE_USER "pi"
set REMOTE_IP "192.168.1.99"
#Upload Pin Configuration Script file
spawn scp -r remote.pinconf.sh $REMOTE_USER@$REMOTE_IP:/home/pi/SoftwareTestLocation
expect "password:"
send "raspberry\r"
expect "*\r"
expect "\r"
#Upload the Software
spawn scp -r ../Debug/ADAU145x.bin $REMOTE_USER@$REMOTE_IP:/home/pi/SoftwareTestLocation
expect "password:"
send "raspberry\r"
expect "*\r"
expect "\r"
#Execute Pin Configuration Script - perform a chmod before
spawn ssh $REMOTE_USER@$REMOTE_IP
expect "password:"
send "raspberry\r"
expect "*\r"
expect "\r"
send -- "chmod +x ~/SoftwareTestLocation/remote.pinconf.sh\r"
send -- "sudo ./SoftwareTestLocation/remote.pinconf.sh\r"
expect "*\r"
expect "\r"
#Execute the Software
send_user "Remote Output\n---\n---\n---\n"
send -- "sudo ~/SoftwareTestLocation/ADAU145x.bin\r"
expect "*\r"
expect "END"
Please give me any suggestions to help discover why the relative path works when I log in directly and execute the software from its folder, but not when I ask for execution by the script.
Thanks.

On Linux, /DSP_DATA/test.xml is an absolute path: it looks for a DSP_DATA folder directly under the root directory.
If the path is meant to be relative, use ./DSP_DATA/test.xml instead. Keep in mind that ./ resolves against the process's current working directory, which is why it works when you cd into the folder and run the binary by hand, but not when the script launches it from elsewhere.
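One way to make that work regardless of how the program is launched is to cd into the program's directory in the launch script before starting it. A minimal sketch of the changed launch lines in the Expect script above, assuming the popen call is switched to ./DSP_DATA/test.xml and the DSP_DATA folder lives next to the binary in ~/SoftwareTestLocation:
#Execute the Software from its own directory so ./DSP_DATA/test.xml resolves
send -- "cd ~/SoftwareTestLocation && sudo ./ADAU145x.bin\r"
expect "*\r"
expect "END"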

Related

Expect: How to use "ls -d filename" for full path name in expect?

I am trying to automate loading of image on the hardware using expect. For that I need to get full path of the image.
I am using the following syntax -
spawn ls -d $env(PWD)/build/image/bmxs.*bin
expect -re {(\S+)(\r)}
set imgpath $expect_out(1,string)
The message I get is -
spawn: returns {51875}
expect: does "" (spawn_id exp4) match regular expression "(\S+)(\r)"? no
So, it appears that the spawn does not return anything.
I've tried various syntaxes, but to no avail -
send "ls -d $env(PWD)/build/images/final/nxos.*bin\r"
spawn "ls -d $env(PWD)/build/image/bmxs.*bin"
puts "$LS" ### where $LS is the command.
None of these work. Am I making a mistake?
Your code suggests that:
the image file is local, i.e. on the same machine where you run Expect, and
you want the first file which matches the pattern.
If this is so, you can just do
set files [glob $env(PWD)/build/image/bmxs.*bin]
set imgpath [lindex [lsort $files] 0]
On the other hand, the phrase "on the hardware" suggests that this image file is on some remote system. If so, and you have already spawned a login session there, you need to send the ls command over the existing session and then expect the output from ls. However, in that case it looks strange to get the directory from $env(PWD), as this reads the environment variable PWD on your local machine.
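For the remote case, a minimal sketch of that approach, assuming a login session has already been spawned and authenticated; the remote path is just a placeholder:
# the ssh session is assumed to be already spawned and logged in
send "ls -d /remote/build/image/bmxs.*bin\r"
# skip the echoed command and capture the first path that ls prints
expect -re {\r\n(/[^\r\n]+bin)\r\n}
set imgpath $expect_out(1,string)
send_user "image path: $imgpath\n"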

iTerm2: quick download over SSH using CMD+click

iTerm2 allows you to click on a link (CMD+click) and open it quickly. However, when working over SSH, this doesn't work. Is it possible to enable this functionality, so that I can CMD+click a file, and it will automatically download into a folder on my local machine?
Thanks!
This is actually possible with Shell Integration installed. Note that Shell Integration will need to be installed on any server that you are ssh'ing into, not just on your local machine. From this link:
iTerm has recently introduced a feature called Shell Integration. Using this feature, we can upload and download files conveniently, directly from iTerm2. Dragging a file into the window while pressing the Option key uploads the file over the remote ssh connection. Right-clicking on a file listed by the ls command will bring up a context menu that includes downloading the file.
Click “iTerm2->Install Shell Integration” when sshing into the remote server.
Ensure the server has a correct FQDN as hostname and can be connected through this hostname. (You can use hostname -f to check it)
If you’re using private key authentication, then you should have id_rsa in your .ssh directory. However, you should also put id_rsa.pub in your .ssh directory to use this feature.
Sorry for the late answer, but I was just trying to do the same thing and came across your question. Thought I would post my findings once I found a solution.
I've not had much success with ⌘+Clicking to download via SCP in iTerm2 because I have a complex set of rules involving jump hosts in ~/.ssh/config.
But I have found an elegant work around: a shell function which writes to STDOUT to trigger iTerm2 into capturing the output and saving it as a file!
I keep the following snippet (Toolbelt → Snippets) which I execute to define a command download:
alias download="bash <(base64 -d <<<'IyEvYmluL2Jhc2gKaWYgWyAkIyAtbHQgMSBdOyB0aGVuCiAgZWNobyAiVXNhZ2U6ICQwIGZpbGUg
Li4uIgogIGV4aXQgMQpmaQpmb3IgZmlsZW5hbWUgaW4gIiRAIgpkbwogIGlmIFsgISAtciAiJGZp
bGVuYW1lIiBdIDsgdGhlbgogICAgZWNobyBGaWxlICRmaWxlbmFtZSBkb2VzIG5vdCBleGlzdCBv
ciBpcyBub3QgcmVhZGFibGUuCiAgICBjb250aW51ZQogIGZpCgogIGZpbGVuYW1lNjQ9JChlY2hv
IC1uICIkZmlsZW5hbWUiIHwgYmFzZTY0KQogIGZpbGVzaXplPSggJCh3YyAtYyAiJHtmaWxlbmFt
ZX0iKSApCiAgcHJpbnRmICJcMDMzXTEzMzc7RmlsZT1uYW1lPSR7ZmlsZW5hbWU2NH07c2l6ZT0k
e2ZpbGVzaXplWzBdfToiCiAgYmFzZTY0IDwgIiRmaWxlbmFtZSIKICBwcmludGYgJ1xhJwpkb25l
Cg==')"
The base64-encoded string decodes to:
#!/bin/bash
if [ $# -lt 1 ]; then
  echo "Usage: $0 file ..."
  exit 1
fi
for filename in "$@"
do
  if [ ! -r "$filename" ] ; then
    echo File $filename does not exist or is not readable.
    continue
  fi

  filename64=$(echo -n "$filename" | base64)
  filesize=( $(wc -c "${filename}") )
  printf "\033]1337;File=name=${filename64};size=${filesize[0]}:"
  base64 < "$filename"
  printf '\a'
done
This relies on iTerm2's download protocol (the \033]1337;File=... escape sequence emitted by the printf above).
Sample session showing the notifications from iTerm2: (screenshot omitted)

Expect ssh and create directory

I'm having some trouble with expect.
I'm trying to ssh onto another machine and then create a directory on that machine.
Right now this is what my code looks like:
spawn ssh username@ipAddress
expect "password"
send "password"
file mkdir directoryName
That code is giving me a "permission denied".
When I try replacing
file mkdir directoryName
with
send "mkdir directoryName"
There's no error, but it doesn't create the directory.
Thanks.
This might help you :-
#!/usr/bin/expect
set timeout -1
spawn -noecho bash -c "ssh username@serveraddress 'cd /user/bill/work; <your-command>'"
expect {
    -re "assword:" {
        send "mypassword\r"
        exp_continue
    }
    eof {
        wait
    }
}
You must pass the command to ssh itself, as it will then run on the remote machine.
Explanation for the above script:
set timeout -1 makes expect wait indefinitely (but the loop exits once the spawned process finishes).
-re matches the regex assword: (which covers both Password: and password:).
eof waits until the spawned process finishes.
After sending the mkdir command, wait for eof to happen:
send "mkdir directoryName\r"
expect eof
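Putting these pieces together, a minimal end-to-end sketch; the host, password, directory name and the prompt pattern are placeholders to adapt:
#!/usr/bin/expect
set timeout -1
spawn ssh username@ipAddress
expect "password:"
send "password\r"
# wait for a shell prompt; adjust the pattern to match your remote prompt
expect -re {\$\s*$}
send "mkdir directoryName\r"
expect -re {\$\s*$}
send "exit\r"
expect eof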

SSH – Force Command execution on login even without Shell

I am creating a restricted user without a shell, for port forwarding only, and I need to execute a script on login via pubkey, even if the user connects with ssh -N user@host, which doesn't ask the SSH server for a shell.
The script should warn the admin about connections authenticated with a pubkey, so the connecting user shouldn't be able to skip its execution (e.g., by connecting with ssh -N).
I have tried to no avail:
Setting the command at /etc/ssh/sshrc.
Using command="COMMAND" in .ssh/authorized_keys (man authorized_keys)
Setting up a script with the command as user's shell. (chsh -s /sbin/myscript.sh USERNAME)
Matching user in /etc/ssh/sshd_config like:
Match User MYUSERNAME
ForceCommand "/sbin/myscript.sh"
All of them work when the user asks for a shell, but not when the user logs in only for port forwarding without a shell (ssh -N).
The ForceCommand option runs without a PTY unless the client requests one. As a result, you don't actually have a shell to execute scripts the way you might expect. In addition, the OpenSSH SSHD_CONFIG(5) man page clearly says:
The command is invoked by using the user's login shell with the -c option.
That means that if you've disabled the user's login shell, or set it to something like /bin/false, then ForceCommand can't work. Assuming that:
the user has a sensible shell defined,
that your target script is executable, and
that your script has an appropriate shebang line
then the following should work in your global sshd_config file once properly modified with the proper username and fully-qualified pathname to your custom script:
Match User foo
ForceCommand /path/to/script.sh
If you only need to run a script you can rely on pam_exec.
Basically you reference the script you need to run in the /etc/pam.d/sshd configuration:
session optional pam_exec.so seteuid /path/to/script.sh
After some testing you may want to change optional to required.
Please refer to this answer "bash - How do I set up an email alert when a ssh login is successful? - Ask Ubuntu" for a similar request.
Indeed, in the script only a limited subset of the environment variables is available:
LANGUAGE=en_US.UTF-8
PAM_USER=bitnami
PAM_RHOST=192.168.1.17
PAM_TYPE=open_session
PAM_SERVICE=sshd
PAM_TTY=ssh
LANG=en_US.UTF-8
LC_ALL=en_US.UTF-8
PWD=/
If you want to get the user info from authorized_keys this script could be helpful:
#!/bin/bash
# Get user from authorized_keys
# pam_exec_login.sh
# * [ssh - What is the SHA256 that comes on the sshd entry in auth.log? - Server Fault](https://serverfault.com/questions/888281/what-is-the-sha256-that-comes-on-the-sshd-entry-in-auth-log)
# * [bash - How to get all fingerprints for .ssh/authorized_keys(2) file - Server Fault](https://serverfault.com/questions/413231/how-to-get-all-fingerprints-for-ssh-authorized-keys2-file)
# Setup log
b=$(basename $0| cut -d. -f1)
log="/tmp/${b}.log"
function timeStamp () {
echo "$(date '+%b %d %H:%M:%S') ${HOSTNAME} $b[$$]:"
}
# Check if opening a remote session with sshd
if [ "${PAM_TYPE}" != "open_session" ] || [ $PAM_SERVICE != "sshd" ] || [ $PAM_RHOST == "::1" ]; then
exit $PAM_SUCCESS
fi
# Get info from auth.log
authLogLine=$(journalctl -u ssh.service |tail -100 |grep "sshd\[${PPID}\]" |grep "${PAM_RHOST}")
echo ${authLogLine} >> ${log}
PAM_USER_PORT=$(echo ${authLogLine}| sed -r 's/.*port (.*) ssh2.*/\1/')
PAM_USER_SHA256=$(echo ${authLogLine}| sed -r 's/.*SHA256:(.*)/\1/')
# Get details from .ssh/authorized_keys
authFile="/home/${PAM_USER}/.ssh/authorized_keys"
PAM_USER_authorized_keys=""
while read l; do
if [[ -n "$l" && "${l###}" = "$l" ]]; then
authFileSHA256=$(ssh-keygen -l -f <(echo "$l"))
if [[ "${authFileSHA256}" == *"${PAM_USER_SHA256}"* ]]; then
PAM_USER_authorized_keys=$(echo ${authFileSHA256}| cut -d" " -f3)
break
fi
fi
done < ${authFile}
if [[ -n ${PAM_USER_authorized_keys} ]]
then
echo "$(timeStamp) Local user: ${PAM_USER}, authorized_keys user: ${PAM_USER_authorized_keys}" >> ${log}
else
echo "$(timeStamp) WARNING: no matching user in authorized_keys" >> ${log}
fi
I am the author of the OP; I came to the conclusion that what I need to achieve is not possible using SSH alone as of today (OpenSSH_6.9p1 Ubuntu-2, OpenSSL 1.0.2d 9 Jul 2015), but I found a great piece of software that uses encrypted Single Packet Authorization to open the SSH port, and its newest version (as of this post, the GitHub master branch) has a feature to execute a command whenever a user authorizes successfully.
FWKNOP - Encrypted Single Packet Authorization
FWKNOP sets iptables rules that allow access to given ports upon receipt of a single encrypted packet sent via UDP. After authorization it allows access for the authorized user for a given time, for example 30 seconds, closing the port afterwards while leaving the established connection open.
1. To install on Ubuntu Linux:
The current version (2.6.0-2.1build1) in the Ubuntu repositories, as of today, still doesn't allow command execution on successful SPA (please use 2.6.8 from GitHub instead).
On client machine:
sudo apt-get install fwknop-client
On server side:
sudo apt-get install fwknop-server
Here is a tutorial on how to setup the client and server machines
https://help.ubuntu.com/community/SinglePacketAuthorization
Then, after it is set up, on server side:
Edit /etc/default/fwknop-server
Change the line START_DAEMON="no" to START_DAEMON="yes"
Then run:
sudo service fwknop-server stop
sudo service fwknop-server start
2. Warning the admin on successful SPA (email, Pushover script, etc.)
As stated above, the current version present in the Ubuntu repositories (2.6.0-2.1build1) cannot execute a command on successful SPA. This feature, which the OP needs, will be released in fwknop version 2.6.8, as stated here:
https://github.com/mrash/fwknop/issues/172
So if you need to use it right now, you can build from the GitHub master branch, which has the CMD_CYCLE_OPEN option.
3. More resources on fwknop
https://help.ubuntu.com/community/SinglePacketAuthorization
https://github.com/mrash/fwknop/ (project on GitHub)
http://www.cipherdyne.org/fwknop/ (project site)
https://www.digitalocean.com/community/tutorials/how-to-use-fwknop-to-enable-single-packet-authentication-on-ubuntu-12-04 (tutorial on DO's community)
I am the author of the OP. Alternatively, you can implement a simple log watcher like the following, written in Python 3, which keeps reading a file and executes a command when a line contains a given pattern.
logwatcher.python3
#!/usr/bin/env python3
# follow.py
#
# Follow a file like tail -f.
import sys
import os
import time
def follow(thefile):
    thefile.seek(0, 2)
    while True:
        line = thefile.readline()
        if not line:
            time.sleep(0.5)
            continue
        yield line

if __name__ == '__main__':
    logfilename = sys.argv[1]
    pattern_string = sys.argv[2]
    command_to_execute = sys.argv[3]
    print("Log filename is: {}".format(logfilename))
    logfile = open(logfilename, "r")
    loglines = follow(logfile)
    for line in loglines:
        if pattern_string in line:
            os.system(command_to_execute)
Usage
Make the above script executable:
chmod +x logwatcher.python3
Add a cronjob to start it after reboot
crontab -e
Then write this line there and save it:
@reboot /home/YOURUSERNAME/logwatcher.python3 "/var/log/auth.log" "session opened for user" "/sbin/myscript.sh"
The first argument of this script is the log file to watch, and the second argument is the string to look for in it. The third argument is the command to execute when a matching line is found.
It is best if you use something more reliable to start/restart the script in case it crashes.

wildcard in expect script doesn't work

I have the following script running successfully. However, if I try to use a wildcard to copy multiple files, it throws an error saying "No such file or directory".
This code works:
#!/usr/bin/expect -f
spawn scp file1.txt root@192.168.1.156:/temp1/.
expect "password:"
send "iamroot\r"
expect "*\r"
expect "\r"
The following doesn't work:
#!/usr/bin/expect -f
spawn scp * root@192.168.1.156:/temp/. #fails here
….
The * is usually expanded by the shell (bash), but in this case your shell is expect. I suspect that expect is not expanding the *.
try:
spawn bash -c "scp * root@192.168.1.156:/temp/."
explanation:
#!/usr/bin/expect -f
spawn echo *
expect "*"
spawn bash -c "echo *"
expect "file1 file2…"
AFAIK scp defaults to copying files only, while bash may expand * to directories as well, if any are present in the current path.
Perhaps trying a -r (recursive) could solve your problem (not sure as I can't test the scenario right now)?
Or if you do not want to copy the whole folder structure, you could use scp *.txt ... depending on your needs.