wildcard in expect script doesn't work

I have the following script running successfully. However, if I try to use a wildcard to copy multiple files, it throws an error saying “No such file or directory”.
This code works:
#!/usr/bin/expect -f
spawn scp file1.txt root@192.168.1.156:/temp1/.
expect "password:"
send "iamroot\r"
expect "*\r"
expect "\r"
The following doesn't work:
#!/usr/bin/expect -f
spawn scp * root@192.168.1.156:/temp/.  ;# fails here
….

The * is usually expanded by the shell (bash), but in this case your shell is expect. I suspect that expect is not expanding the *.
try:
spawn bash -c "scp * root@192.168.1.156:/temp/."
explanation:
#!/usr/bin/expect -f
spawn echo *
expect "*"
spawn bash -c "echo *"
expect "file1 file2…"

AFAIK scp defaults to copying plain files, while bash may expand * to directories as well, if any are found in the current directory.
Perhaps adding -r (recursive) could solve your problem (not sure, as I can't test the scenario right now)?
Or, if you do not want to copy the whole folder structure, you could use scp *.txt ..., depending on your needs.
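Putting it together, a minimal sketch of the corrected script (the address, password, and paths come from the question; expect eof simply waits for scp to finish):
#!/usr/bin/expect -f
# let bash expand the glob, since expect (Tcl) does not expand * itself
spawn bash -c "scp *.txt root@192.168.1.156:/temp/."
expect "password:"
send "iamroot\r"
expect eof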

Related

Expect: How to use "ls -d filename" for full path name in expect?

I am trying to automate loading of an image onto the hardware using expect. For that I need to get the full path of the image.
I am using the following syntax:
spawn ls -d $env(PWD)/build/image/bmxs.*bin
expect -re {(\S+)(\r)}
set imgpath $expect_out(1,string)
The message I get is -
spawn: returns {51875}
expect: does "" (spawn_id exp4) match regular expression "(\S+)(\r)"? no
So, it appears that the spawn does not return anything.
I've tried various syntaxes, but to no avail:
send "ls -d $env(PWD)/build/images/final/nxos.*bin\r"
spawn "ls -d $env(PWD)/build/image/bmxs.*bin"
puts "$LS" ### where $LS is the command.
None of these work. Am I making a mistake?
Your code suggests that (1) the image file is local, i.e. on the same machine where you run Expect, and (2) you want the first file that matches the pattern.
If this is so, you can just do:
set files [glob $env(PWD)/build/image/bmxs.*bin]
set imgpath [lindex [lsort $files] 0]
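A hedged refinement: glob raises an error when nothing matches, so -nocomplain plus an explicit check may be safer:
# -nocomplain makes glob return an empty list instead of raising an error
set files [glob -nocomplain $env(PWD)/build/image/bmxs.*bin]
if {[llength $files] == 0} {
    puts stderr "no image found"
    exit 1
}
set imgpath [lindex [lsort $files] 0]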
On the other hand, the phrase "on the hardware" suggests that this image file is on some remote system. If so, and you have already spawned a login session there, you need to send the ls command over the existing session and then expect the output from ls. In that case, however, it looks strange to get the directory from $env(PWD), as this reads the environment variable PWD on your local machine.
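For the remote case, a rough sketch (assuming a session has already been spawned and the remote path is known; the regex is an assumption that skips the echoed command line and grabs the first word of the next line):
# send ls over the existing spawned session
send "ls -d /remote/build/image/bmxs.*bin\r"
# capture the first non-space token after the echoed command line
expect -re {\r\n(\S+)\r\n}
set imgpath $expect_out(1,string)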

Relative path does not work from a script

This is my first post on Stack Overflow.
I'm currently using xmlstarlet, with popen, to parse an XML file and return some results to me.
I want to organize the "changeable" files (XMLs) inside a subfolder of my project, so I did the following:
fp = popen("xmlstarlet sel -t -m '//Program/Data' -v . -n < /DSP_DATA/test.xml", "r");
The issue is: I'm using a script to load the program and some configurations onto my embedded system (headless). When I execute the program directly over ssh it runs great, showing all the outputs, but when I run it through the script it shows:
sh: 1: cannot open /DSP_DATA/test.xml: No such file
Below is the script used to load the executable:
#This script will upload the executable in the "Debug" folder to the remote host and execute it in a terminal over SSH.
set REMOTE_USER "pi"
set REMOTE_IP "192.168.1.99"
#Upload Pin Configuration Script file
spawn scp -r remote.pinconf.sh $REMOTE_USER@$REMOTE_IP:/home/pi/SoftwareTestLocation
expect "password:"
send "raspberry\r"
expect "*\r"
expect "\r"
#Upload the Software
spawn scp -r ../Debug/ADAU145x.bin $REMOTE_USER@$REMOTE_IP:/home/pi/SoftwareTestLocation
expect "password:"
send "raspberry\r"
expect "*\r"
expect "\r"
#Execute Pin Configuration Script - perform a chmod first
spawn ssh $REMOTE_USER@$REMOTE_IP
expect "password:"
send "raspberry\r"
expect "*\r"
expect "\r"
send -- "chmod +x ~/SoftwareTestLocation/remote.pinconf.sh\r"
send -- "sudo ./SoftwareTestLocation/remote.pinconf.sh\r"
expect "*\r"
expect "\r"
#Execute the Software
send_user "Remote Output\n---\n---\n---\n"
send -- "sudo ~/SoftwareTestLocation/ADAU145x.bin\r"
expect "*\r"
expect "END"
Please give me any suggestions to help discover why the relative path works when I log in directly and execute the software from its folder, but not when I run it through the script.
Thanks.
On Linux, /DSP_DATA/test.xml is an absolute path: it means the DSP_DATA folder directly under the root directory.
If the path is relative to the executable's location, you should use ./DSP_DATA/test.xml instead.
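If you would rather keep a relative path, a hedged workaround at the script level (the directory name is taken from the question's upload script) is to change into the program's directory before launching it, so ./DSP_DATA/test.xml resolves the same way it does when you run the program by hand:
#Execute the Software from its own directory so relative paths resolve
send -- "cd ~/SoftwareTestLocation && sudo ./ADAU145x.bin\r"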

Run RapSearch-Program with Torque PBS and qsub

My problem is that I have a cluster server with Torque PBS and want to use it to run a sequence comparison with the program RapSearch.
The normal RapSearch command is:
./rapsearch -q protein.fasta -d database -o output -e 0.001 -v 10 -x t -z 32
Now I want to run it with 2 nodes on the cluster server.
I've tried echo "./rapsearch -q protein.fasta -d database -o output -e 0.001 -v 10 -x t -z 32" | qsub -l nodes=2, but nothing happened.
Do you have any suggestions? Where am I going wrong? Help, please.
Standard output (and error output) files are placed in your home directory by default; take a look. You are looking for a file named STDIN.e[numbers]; it will contain the error message.
However, I see that you're using ./rapsearch but are not really being explicit about what directory you're in. Your problem is therefore probably a matter of changing directory into the directory that you submitted from. When your terminal is in the directory of the rapsearch executable, try echo "cd \$PBS_O_WORKDIR && ./rapsearch [arguments]" | qsub [arguments] to submit your job to the cluster.
Other tips:
You could add rapsearch to your path if you use it often. Then you can use it like a regular command anywhere. It's a matter of adding the line export PATH=/full/path/to/rapsearch/bin:$PATH to your .bashrc file.
Create a submission script for use with qsub. Here is a good example.
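For reference, a minimal sketch of such a submission script; the nodes=2 request and the rapsearch command come from the question, while the job name and the -j oe flag (merge stdout and stderr) are illustrative additions:
#!/bin/bash
#PBS -N rapsearch
#PBS -l nodes=2
#PBS -j oe
# run from the directory the job was submitted from
cd $PBS_O_WORKDIR
./rapsearch -q protein.fasta -d database -o output -e 0.001 -v 10 -x t -z 32
Save it as, say, submit.sh and submit it with qsub submit.sh.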

Unable to run a postgresql script from bash

I am learning shell scripting. I have created a shell script whose function is to log in to the DB and run a .sql file. Following are the contents of the script:
#!/bin/bash
set -x
echo "Login to postgres user for autoqa_rpt_production"
$DB_PATH -U $POSTGRESS_USER $Auto_rpt_production$TARGET_DB -p $TARGET_PORT
echo "Running SQL Dump - auto_qa_db_sync"
\\i auto_qa_db_sync.sql
After running the above script, I get the following error
./autoqa_script.sh: 39: ./autoqa_script.sh: /i: not found
Following one article, I tried reversing the slash, but it didn't work.
I don't understand why this is happening, because when I run the sql file manually it works properly. Can anyone help?
#!/bin/bash
set -x
echo "Login to postgres user for autoqa_rpt_production and run script"
$DB_PATH -U $POSTGRESS_USER $Auto_rpt_production$TARGET_DB -p $TARGET_PORT -f auto_qa_db_sync.sql
The lines you put in a shell script are (more or less, let's say so for now) equivalent to what you would type right at the Bash prompt (the one ending with '$', or '#' if you're root). When you execute a script (a list of commands), each command runs after the previous one terminates.
What you wanted to do is run the client and issue a "\i auto_qa_db_sync.sql" command inside it.
What you actually did was run the client and, after the client terminated, issue that command in Bash.
You should read about Bash pipelines - these are the way to run programs and feed text into them. Following your original idea for solving the problem, you'd write something like:
echo '\i auto_qa_db_sync.sql' | $DB_PATH -U $POSTGRESS_USER $Auto_rpt_production$TARGET_DB -p $TARGET_PORT
Hope that helps.
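Equivalently, a here-document does the same job and stays readable if you later add more psql commands (a sketch reusing the same variables as above):
$DB_PATH -U $POSTGRESS_USER $Auto_rpt_production$TARGET_DB -p $TARGET_PORT <<'EOF'
\i auto_qa_db_sync.sql
EOF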

Is there a curl/wget option that prevents saving files in case of http errors?

I want to download a lot of URLs in a script, but I do not want to save the ones that lead to HTTP errors.
As far as I can tell from the man pages, neither curl nor wget provides such functionality.
Does anyone know of another downloader that does?
I think the -f option to curl does what you want:
-f, --fail
(HTTP) Fail silently (no output at all) on server errors. This is mostly done to better
enable scripts etc to better deal with failed attempts. In normal cases when an HTTP
server fails to deliver a document, it returns an HTML document stating so (which often
also describes why and more). This flag will prevent curl from outputting that and
return error 22. [...]
However, if the response was actually a 301 or 302 redirect, that still gets saved, even if its destination would result in an error:
$ curl -fO http://google.com/aoeu
$ cat aoeu
<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
<TITLE>301 Moved</TITLE></HEAD><BODY>
<H1>301 Moved</H1>
The document has moved
<A HREF="http://www.google.com/aoeu">here</A>.
</BODY></HTML>
To follow the redirect to its dead end, also give the -L option:
-L, --location
(HTTP/HTTPS) If the server reports that the requested page has moved to a different
location (indicated with a Location: header and a 3XX response code), this option will
make curl redo the request on the new place. [...]
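Combining the two for the example above (a sketch; whether an empty output file is left behind may vary by curl version), -L follows the 301 and -f then makes curl fail with exit code 22 at the final 404 instead of saving the error page:
curl -fLO http://google.com/aoeu
echo $?   # expect 22 here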
A one-liner I just set up for this very purpose (works only with a single file, but might be useful for others):
A=$$; ( wget -q "http://foo.com/pipo.txt" -O $A.d && mv $A.d pipo.txt ) || ( rm -f $A.d; echo "Removing temp file" )
This will attempt to download the file from the remote host. If there is an error, the file is not kept. In all other cases, it's kept and renamed.
Ancient thread.. landed here looking for a solution... ended up writing some shell code to do it.
if [ `curl -s -w "%{http_code}" --compress -o /tmp/something \
      http://example.com/my/url/` = "200" ]; then
    echo "yay"; cp /tmp/something /path/to/destination/filename
fi
This downloads the output to a tmp file and creates/overwrites the destination file only if the status was 200. My use case is slightly different: in my case the output takes more than 10 seconds to generate, and I did not want the destination file to remain blank for that duration.
NOTE: I am aware that this is an older question, but I believe I have found a better solution for those using wget than any of the above answers provide.
wget -q $URL 2>/dev/null
This will save the target file to the local directory if and only if the HTTP status code is within the 200 range (OK).
Additionally, if you wanted to do something like print out an error whenever the request was met with an error, you could check the wget exit code for non-zero values like so:
wget -q $URL 2>/dev/null
if [ $? != 0 ]; then
    echo "There was an error!"
fi
I hope this is helpful to someone out there facing the same issues I was.
Update:
I just put this into a more script-able form for my own project, and thought I'd share:
function dl {
    pushd . > /dev/null
    cd $(dirname $1)
    wget -q $BASE_URL/$1 2> /dev/null
    if [ $? != 0 ]; then
        echo ">> ERROR could not download file \"$1\"" 1>&2
        exit 1
    fi
    popd > /dev/null
}
I have a workaround to propose: it does download the file, but it also removes the file if its size is 0 (which happens if a 404 occurs).
wget -O <filename> <url/to/file>
if [[ $(du <filename> | cut -f 1) == 0 ]]; then
    rm <filename>;
fi;
It works for zsh but you can adapt it for other shells.
But it only saves the file in the first place if you provide the -O option.
As an alternative you can create a temporary file and rotate it:
wget http://example.net/myfile.json -O myfile.json.tmp -t 3 -q && mv myfile.json.tmp myfile.json
The previous command will always download to the file "myfile.json.tmp"; only when the wget exit status equals 0 is the file rotated to "myfile.json".
This prevents overwriting the final file when a network failure occurs.
The advantage of this method is that, if something goes wrong, you can inspect the temporary file and see what error message was returned.
The "-t" parameter tells wget to retry the download several times in case of error.
The "-q" enables quiet mode; it's important with cron because cron will report any output of wget.
The "-O" gives the output file path and name.
Remember that for cron schedules it's very important to always provide the full path for all files, and in this case for the "wget" program itself as well.
You can download the file without saving it by using the "-O -" option, as in:
wget -O - http://jagor.srce.hr/
You can get more information at http://www.gnu.org/software/wget/manual/wget.html#Advanced-Usage