Expect: How to use "ls -d filename" to get a full path name in expect?

I am trying to automate loading an image onto hardware using Expect. For that I need to get the full path of the image.
I am using the following syntax:
spawn ls -d $env(PWD)/build/image/bmxs.*bin
expect -re {(\S+)(\r)}
set imgpath $expect_out(1,string)
The message I get is:
spawn: returns {51875}
expect: does "" (spawn_id exp4) match regular expression "(\S+)(\r)"? no
So, it appears that the spawn does not return anything.
I've tried various syntaxes, but to no avail:
send "ls -d $env(PWD)/build/images/final/nxos.*bin\r"
spawn "ls -d $env(PWD)/build/image/bmxs.*bin"
puts "$LS" ### where $LS is the command.
None of these work. Am I making a mistake?

Your code suggests that
the image file is local - on same machine where you run Expect
You want the first file which matches the pattern
If so, you don't need to spawn ls at all (spawn runs the command directly, without a shell, so the * pattern is not glob-expanded anyway). Tcl can do the matching itself:
set files [glob $env(PWD)/build/image/bmxs.*bin]
set imgpath [lindex [lsort $files] 0]
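Note that glob raises an error when nothing matches. With -nocomplain it returns an empty list instead, so you can test that case yourself; a small sketch:

set files [glob -nocomplain $env(PWD)/build/image/bmxs.*bin]
if {[llength $files] == 0} {
    puts stderr "no image found under $env(PWD)/build/image"
    exit 1
}
set imgpath [lindex [lsort $files] 0]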
On the other hand, the phrase "on the hardware" suggests that the image file is on some remote system. If so, and you have already spawned a login session there, you need to send the ls command over the existing session and then expect the output of ls. In that case, however, it looks odd to take the directory from $env(PWD), as that reads the environment variable PWD on your local machine.
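If it is indeed the remote case, a minimal sketch could look like this (it assumes a session has already been spawned and logged in, and the remote path is only a placeholder to adjust):

# Sketch only: assumes an already spawned, logged-in session and that
# the image lives under /remote/build/image on the target.
send "ls -d /remote/build/image/bmxs.*bin\r"
# The sent command is echoed back first, so anchor the match to the
# start of a fresh output line instead of matching the echo itself.
expect -re {\r\n(\S*bmxs\.[^\r\n]*bin)\r?\n}
set imgpath $expect_out(1,string)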

Related

Running sqlcmd in a batch file works, but running the same batch file as a scheduled task does nothing

I have looked at many SO questions and answers, and though some seem similar to my issue, they are not. The answers given fix the problems those questions asked about, but they do not solve mine.
I have a batch file...
#ECHO ON
ECHO Disabling the following... >> C:\Apps\Debug.log
ECHO - V1 >> C:\Apps\Debug.log
FOR /F "tokens=* USEBACKQ" %%F IN (`sqlcmd -j -S DOMAIN\SQLSERVER -U username -P password -d DBNAME -Q "UPDATE [DBNAME].[dbo].[table1] SET ColOne='V1_OFF' WHERE ColOne='V1'"`) DO (
Echo %%F >> C:\Apps\Debug.log
)
EXIT /B
When I run this file at the command prompt it works perfectly fine. When I run it as a scheduled task, the echoes show up as expected, but nothing from the for loop.
Yes, I have made sure (using whoami) that the username for the scheduled task is the same as for my manual runs.
Yes, I know the user running the script has rights to everything (file access as well as DB access), because it works fine from the command prompt.
The scheduled task is set to run whether the user is logged on or not.
Any ideas what might be wrong or what I can try for debugging purposes?
Thanks!
sqlcmd alone is perhaps not enough: cmd.exe in the environment of a scheduled task may fail to find the executable using the local PATHEXT and PATH environment variables. The executable should be specified with its fully qualified file name, i.e. drive + path + name + extension. Then the batch file no longer depends on the environment variables PATH and PATHEXT, because all files are referenced with fully qualified file names.
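For example, sqlcmd could be referenced like this (the path below is only an assumption; check where SQLCMD.EXE is actually installed on the machine that runs the task):

REM Example path only - verify the real location of SQLCMD.EXE on the target machine.
SET "SQLCMD=C:\Program Files\Microsoft SQL Server\Client SDK\ODBC\170\Tools\Binn\SQLCMD.EXE"
REM The extra pair of outer quotes is removed by the background cmd.exe
REM processing the command line, keeping the inner quoting intact.
FOR /F "tokens=* USEBACKQ" %%F IN (`""%SQLCMD%" -j -S DOMAIN\SQLSERVER -U username -P password -d DBNAME -Q "UPDATE [DBNAME].[dbo].[table1] SET ColOne='V1_OFF' WHERE ColOne='V1'""`) DO (
    ECHO %%F>> C:\Apps\Debug.log
)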
for executes the specified command line by starting one more command process in the background with %ComSpec% /c and the specified command line appended. With Windows installed on drive C:, this means the following is executed:
C:\Windows\System32\cmd.exe /c sqlcmd -j -S DOMAIN\SQLSERVER -U username -P password -d DBNAME -Q "UPDATE [DBNAME].[dbo].[table1] SET ColOne='V1_OFF' WHERE ColOne='V1'"
for captures everything the started command process writes to its STDOUT handle. The captured output is processed line by line by for after the started cmd.exe has terminated. Error messages written to STDERR by the started cmd.exe, or by the commands/executables it runs in the background, are redirected to the STDERR handle of the command process processing the batch file and printed to the console. But there is no console window when a batch file runs as a scheduled task, so in this case the error messages cannot be seen.
The for command line can easily be modified to also get the error messages written into C:\Apps\Debug.log:
FOR /F "tokens=* USEBACKQ" %%F IN (`sqlcmd -j -S DOMAIN\SQLSERVER -U username -P password -d DBNAME -Q "UPDATE [DBNAME].[dbo].[table1] SET ColOne='V1_OFF' WHERE ColOne='V1' 2^>^&1"`) DO (
The Microsoft article Using command redirection operators explains 2>&1. The two operators > and & must be escaped with ^ to be interpreted as literal characters when the Windows command processor parses the for command line; for then executes %ComSpec% /c with the specified command line, on which 2^>^&1 has already been changed to 2>&1.
With this modification, does the log file C:\Apps\Debug.log contain the following two lines?
'sqlcmd' is not recognized as an internal or external command,
operable program or batch file.
If yes, then the started cmd.exe found no executable with the file name sqlcmd. The best solution is to reference this executable with its fully qualified file name, as in the example above. See also: What is the reason for "X is not recognized as an internal or external command, operable program or batch file"?
Otherwise, sqlcmd perhaps output an error message, which should now also be in the log file C:\Apps\Debug.log.
It would also be possible to use the following command line to let the background cmd.exe write the error messages into a separate error log file C:\Apps\Error.log:
FOR /F "tokens=* USEBACKQ" %%F IN (`sqlcmd -j -S DOMAIN\SQLSERVER -U username -P password -d DBNAME -Q "UPDATE [DBNAME].[dbo].[table1] SET ColOne='V1_OFF' WHERE ColOne='V1'" 2^>C:\App\Error.log`) DO (
"tokens=* usebackq" results in first deleting all leading horizontal tabs and normal spaces on non-empty lines by for, then checking if the remaining line starts with ; in which case the line is also ignored and finally assigning the captured line not starting with ; and with leading tabs/spaces removed to loop variable F for further processing.
Better would be to use the options usebackq^ delims^=^ eol^= not enclosed in double quotes, which requires escaping the two spaces and the two equal signs with the caret character ^ so that cmd.exe interprets them as literal characters when parsing the command line before executing for. The line-splitting behavior is disabled completely with delims= because it defines an empty list of delimiters, and no line except an empty line is ignored anymore because the end-of-line character is changed from the default ; to no character.
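The same for command line with these escaped options in place would be:

FOR /F usebackq^ delims^=^ eol^= %%F IN (`sqlcmd -j -S DOMAIN\SQLSERVER -U username -P password -d DBNAME -Q "UPDATE [DBNAME].[dbo].[table1] SET ColOne='V1_OFF' WHERE ColOne='V1' 2^>^&1"`) DO (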
Finally, a space on an echo line to the left of the redirection operator >> is also output by echo and is therefore written into the log file as a trailing space. So no space should be used to the left of > or >> when printing a line with echo redirected into a file. But care must be taken when omitting that space: the word to the left of the redirection operator should not be 1, 2, ..., 9, as that would redirect one of these numbered handles into the specified file instead of outputting the digit. So if unknown text should be written into a file, it is better to specify the redirection operator > or >> and the fully qualified file name first, and the echo command with the text to output after it. See also: Why does ECHO command print some extra trailing space into the file?
The three echo command lines for this batch file would therefore be:
ECHO Disabling the following...>> C:\Apps\Debug.log
ECHO - V1>> C:\Apps\Debug.log
>>C:\Apps\Debug.log ECHO %%F
following... and V1 are safe to be written correctly into the file. %%F, however, could be just 1, or a string ending with a space and a single digit, so it is better to specify the redirection first on the last echo command line; cmd.exe then finally executes the command line ECHO %%F 1>>C:\Apps\Debug.log.

ssh to a server and create a directory based off a variable - all in one line

So I have a simple script that lists the folder and file structure of the current directory and writes it out to a file in the current user's home directory, then rsyncs that file to a remote server into a specific folder.
The first part of the script SSHes into the remote server and creates a unique folder that a later part of the script transfers the file into.
#ssh -p 12345 sftp.domain.com ' bash -c "mkdir incoming/[foldername]" '
My question is: how can I pass a variable to this? I would put the following in the script and then run the script with the folder name as an argument, so it arrives as $1:
#ssh -p 12345 sftp.domain.com ' bash -c "mkdir incoming/folder-$1" '
However, it doesn't work the way I hoped. All I end up with is a folder on the remote server named "folder-", as it presumably doesn't pass the variable along with the rest when it SSHes in.
Is there a better way to make this work?
The rest of the script would also reference the variable $1 to copy the file into the folder created on the remote server earlier in the script.
If I understand the problem correctly, the parameter you are trying to reference is set on the local client side (the command line from where you initiate the ssh connection), but you want to reference it in the command line that is to run on the remote server side. This really has nothing to do with ssh and everything to do with shell parameter/variable expansion on the local client side.
The problem is with your usage of single quotes vs. double quotes. Most Unix command shells, including bash which is likely the shell you are running on the local client side, perform environment variable expansion inside of double quotes but not inside of single quotes. So in your command line you should be able to accomplish your goal by changing the single quotes to double quotes and then escaping the embedded double quote characters like this:
#ssh -p 12345 sftp.domain.com " bash -c \"mkdir incoming/folder-$1\" "
Here is a similar example that shows this in action:
$ export EXAMPLE=abc
$ ssh localhost ' bash -c "echo $EXAMPLE def" '
def
$ ssh localhost " bash -c \"echo $EXAMPLE def\" "
abc def
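Incidentally, the inner bash -c wrapper is not needed for a single command: ssh already hands its command string to the remote user's login shell, so this shorter form (same host and port as in your example) behaves the same:

ssh -p 12345 sftp.domain.com "mkdir incoming/folder-$1"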

Relative path does not work from a script

This is my first post on Stack Overflow.
I'm currently using xmlstarlet, with popen, to parse an XML file and return some results to me.
I want to organize the "changeable" files (XMLs) inside a subfolder of my project, so I did the following:
fp = popen("xmlstarlet sel -t -m '//Program/Data' -v . -n < /DSP_DATA/test.xml", "r");
The issue is: I'm using a script to load the program and some configuration onto my embedded (headless) system. When I execute the program directly over ssh it runs great, showing all the output, but when I run it via the script it shows:
sh: 1: cannot open /DSP_DATA/test.xml: No such file
Below is the script used to load the executable:
#This script will upload the executable in the "Debug" folder to the remote host and execute it in a terminal over SSH.
set REMOTE_USER "pi"
set REMOTE_IP "192.168.1.99"
#Upload Pin Configuration Script file
spawn scp -r remote.pinconf.sh $REMOTE_USER@$REMOTE_IP:/home/pi/SoftwareTestLocation
expect "password:"
send "raspberry\r"
expect "*\r"
expect "\r"
#Upload the Software
spawn scp -r ../Debug/ADAU145x.bin $REMOTE_USER@$REMOTE_IP:/home/pi/SoftwareTestLocation
expect "password:"
send "raspberry\r"
expect "*\r"
expect "\r"
#Excecute Pin Configuration Script - perform an CHMOD before
spawn ssh $REMOTE_USER@$REMOTE_IP
expect "password:"
send "raspberry\r"
expect "*\r"
expect "\r"
send -- "chmod +x ~/SoftwareTestLocation/remote.pinconf.sh\r"
send -- "sudo ./SoftwareTestLocation/remote.pinconf.sh\r"
expect "*\r"
expect "\r"
#Execute the Software
send_user "Remote Output\n---\n---\n---\n"
send -- "sudo ~/SoftwareTestLocation/ADAU145x.bin\r"
expect "*\r"
expect "END"
Please give me any suggestions to help discover why the relative path works when I log in directly and execute the software from its folder, but not when I run it via the script.
Thanks.
On Linux, /DSP_DATA/test.xml means the DSP_DATA folder directly under the root directory.
If the path is meant to be relative, use ./DSP_DATA/test.xml instead.
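A relative path like ./DSP_DATA/test.xml is resolved against the process's current working directory, which for a command started over ssh is the user's home directory. One possible fix, assuming DSP_DATA sits next to the binary in ~/SoftwareTestLocation, is to change into that directory in the expect script before launching the program:

send -- "cd ~/SoftwareTestLocation && sudo ./ADAU145x.bin\r"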

wildcard in expect script doesn't work

I have the following script running successfully. However, if I try to use a wildcard to copy multiple files, it throws an error saying "No such file or directory".
This code works:
#!/usr/bin/expect -f
spawn scp file1.txt root@192.168.1.156:/temp1/.
expect "password:"
send "iamroot\r"
expect "*\r"
expect "\r"
The following doesn't work:
#!/usr/bin/expect -f
spawn scp * root@192.168.1.156:/temp/. #fails here
….
The * is usually expanded by the shell (bash), but in this case no shell is involved: expect's spawn runs scp directly, so the * is not expanded.
try:
spawn bash -c 'scp * root@192.168.1.156:/temp/.'
explanation:
#!/usr/bin/expect -f
spawn echo *
expect "*"
spawn bash -c 'echo *'
expect "file1 file2…"
AFAIK scp defaults to copying plain files, while bash might expand * to directories as well, if any are present in the current path.
Perhaps adding -r (recursive) could solve your problem (not sure, as I can't test the scenario right now)?
Or, if you do not want to copy the whole folder structure, you could use scp *.txt ..., depending on your needs.

Is there a curl/wget option that prevents saving files in case of http errors?

I want to download a lot of urls in a script but I do not want to save the ones that lead to HTTP errors.
As far as I can tell from the man pages, neither curl nor wget provides such functionality.
Does anyone know about another downloader that does?
I think the -f option to curl does what you want:
-f, --fail
(HTTP) Fail silently (no output at all) on server errors. This is mostly done to better
enable scripts etc to better deal with failed attempts. In normal cases when an HTTP
server fails to deliver a document, it returns an HTML document stating so (which often
also describes why and more). This flag will prevent curl from outputting that and
return error 22. [...]
However, if the response was actually a 301 or 302 redirect, that still gets saved, even if its destination would result in an error:
$ curl -fO http://google.com/aoeu
$ cat aoeu
<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
<TITLE>301 Moved</TITLE></HEAD><BODY>
<H1>301 Moved</H1>
The document has moved
here.
</BODY></HTML>
To follow the redirect to its dead end, also give the -L option:
-L, --location
(HTTP/HTTPS) If the server reports that the requested page has moved to a different
location (indicated with a Location: header and a 3XX response code), this option will
make curl redo the request on the new place. [...]
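Combining the two, curl follows the redirect above to its 404 dead end, saves nothing, and reports the failure through its exit status (here with -s added to silence the progress and error output):

$ curl -fsLO http://google.com/aoeu
$ echo $?
22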
A one-liner I just set up for this very purpose (it works only with a single file, but might be useful for others):
A=$$; ( wget -q "http://foo.com/pipo.txt" -O $A.d && mv $A.d pipo.txt ) || (rm $A.d; echo "Removing temp file")
This will attempt to download the file from the remote host. If there is an error, the file is not kept; in all other cases, it is kept and renamed.
Ancient thread, but I landed here looking for a solution and ended up writing some shell code to do it.
if [ `curl -s -w "%{http_code}" --compress -o /tmp/something \
http://example.com/my/url/` = "200" ]; then
echo "yay"; cp /tmp/something /path/to/destination/filename
fi
This downloads the output to a tmp file, and creates/overwrites the output file only if the status was 200. My use case is slightly different: in my case the output takes more than 10 seconds to generate, and I did not want the destination file to remain blank for that duration.
NOTE: I am aware that this is an older question, but I believe I have found a better solution for those using wget than any of the above answers provide.
wget -q $URL 2>/dev/null
Will save the target file to the local directory if and only if the HTTP status code is within the 200 range (Ok).
Additionally, if you wanted to do something like print out an error whenever the request was met with an error, you could check the wget exit code for non-zero values like so:
wget -q $URL 2>/dev/null
if [ $? != 0 ]; then
echo "There was an error!"
fi
I hope this is helpful to someone out there facing the same issues I was.
Update:
I just put this into a more script-able form for my own project, and thought I'd share:
function dl {
    # Remember the current directory, then work in the target file's directory
    pushd . > /dev/null
    cd "$(dirname "$1")"
    wget -q "$BASE_URL/$1" 2> /dev/null
    if [ $? != 0 ]; then
        echo ">> ERROR could not download file \"$1\"" 1>&2
        exit 1
    fi
    popd > /dev/null
}
I have a workaround to propose: it does download the file, but it also removes it if its size is 0 (which happens if a 404 occurs).
wget -O <filename> <url/to/file>
if [[ $(du <filename> | cut -f 1) == 0 ]]; then
rm <filename>;
fi;
It works for zsh but you can adapt it for other shells.
But it only saves the file in the first place if you provide the -O option.
As an alternative, you can create a temporary file and rotate it into place:
wget http://example.net/myfile.json -O myfile.json.tmp -t 3 -q && mv myfile.json.tmp myfile.json
The previous command always downloads to the file "myfile.json.tmp"; only when the wget exit status is 0 is the file rotated into place as "myfile.json".
This prevents the final file from being overwritten when a network failure occurs.
The advantage of this method is that if something goes wrong you can inspect the temporary file and see what error message it contains.
The "-t" parameter makes wget attempt the download several times in case of error.
The "-q" parameter enables quiet mode, which is important with cron because cron reports any output of wget.
The "-O" is the output file path and name.
Remember that for cron schedules it is very important to always provide the full path for all files, and in this case for the "wget" program itself as well.
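For instance, a crontab entry following that advice might look like this (all paths here are illustrative and should be adapted):

*/15 * * * * /usr/bin/wget -q -t 3 http://example.net/myfile.json -O /home/user/myfile.json.tmp && mv /home/user/myfile.json.tmp /home/user/myfile.json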
You can download the file without saving it by using the "-O -" option, which writes the output to standard output instead:
wget -O - http://jagor.srce.hr/
You can get more information at http://www.gnu.org/software/wget/manual/wget.html#Advanced-Usage