ActiveTCL - Unable to run a batch file from an Expect Script

I was originally trying to run an executable (tftpd32.exe) from Expect with the following command, but for some unknown reason it hung the entire script:
exec c:/tftpd32.351/tftpd32.exe
So I decided to call a batch file that starts the executable instead.
I tried to call the batch file with the following command, but I get an error message stating that Windows cannot find the file.
exec c:/tftpd32.351/start_tftp.bat
I also tried the following, but it does not start the executable:
spawn cmd.exe /c c:/tftpd32.351/start_tftp.bat
The batch file contains this, and it runs fine when I double-click it:
start tftpd32.exe
Any help would be very much appreciated.
Thanks

The right way to run that program from Tcl is to do:
set tftpd "c:/tftpd32.351/tftpd32.exe"
exec {*}[auto_execok start] "" [file nativename $tftpd]
Note that you should always include that extra empty argument when using start: start takes an optional quoted string giving the title of the window to create, and it tends to misinterpret the first quoted argument as that title even when that leaves it with no mandatory arguments. You also need to pass the native system name of the executable, hence the file nativename.
If you've got an older version of Tcl inside your expect program (8.4 or before), you'd do this instead:
set tftpd "c:/tftpd32.351/tftpd32.exe"
eval exec [auto_execok start] [list "" [file nativename $tftpd]]
The list command in that weird eval exec construction adds some necessary quoting that you'd have trouble generating otherwise. Use it exactly as above or you'll get very strange errors. (Or upgrade to something where you don't need nearly as much code gymnastics; the {*} syntax was added for a good reason!)
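For what it's worth, the reason the bare exec hung is that exec normally waits for the launched program to exit, and a server GUI like tftpd32 doesn't exit until you close it. If you don't need start's window handling, Tcl's exec can also detach the process itself with a trailing & argument (a minimal sketch, using the path from the question):
set tftpd "c:/tftpd32.351/tftpd32.exe"
# the trailing & tells exec to run the program in the background
# and return immediately instead of waiting for it to exit
exec $tftpd &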

Related

Executing an additional command in a backend that takes the to-be-generated file

I'm currently looking for a way to execute iverilog from within Yosys, more precisely at the write_verilog step.
I need to feed iverilog the file that will be generated by write_verilog (the reason is that I need to preserve the variable source information, which is kept in the Yosys attributes).
However, the execute() function only writes the file out when it returns.
If I call iverilog testbench.v design.v, with design.v being the file generated by write_verilog, I get an error telling me it's missing modules.
Is it possible to run commands that depend on the generated file after execute() has finished, while still being inside the Verilog backend?
You could use a script instead and run iverilog after write_verilog; inside a Yosys script, a line beginning with ! is passed to the shell:
write_verilog design.v
!iverilog testbench.v design.v
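For context, a minimal sketch of a complete Yosys script along those lines; the input file name and the synth pass are illustrative assumptions, the rest comes from the question:
read_verilog design_in.v    # hypothetical input file
synth
write_verilog design.v
# lines starting with ! are handed to the shell once the pass above is done
!iverilog -o testbench.vvp testbench.v design.v
!vvp testbench.vvp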

TCL: how to execute a program using the environment PATH variable

I've got the following line in my script:
exec $::env(PATH)/program.exe
My PATH environment variable contains a directory that holds this executable file. For example, among other entries, PATH contains:
D:\my_program\bin
I get this error:
Error:
couldn't execute "C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;D:\my_program\bin;\program": no such file or directory
Any suggestions on how to execute a .exe file using a system variable like PATH in Tcl?
Thanks
PS: OK, when I create a new environment variable (PATH1, containing just that one path) and put the .exe file's directory in it, it seems to work. Is there any solution that works with PATH (with multiple paths) without having to put D:\my_program\bin first?
You should simply use the Tcl library function made for this: auto_execok.
Try this:
exec {*}[auto_execok program.exe]
It automatically searches the PATH and constructs the right path for use with exec.
For example, to start notepad.exe:
% auto_execok notepad.exe
C:/windows/system32/notepad.exe
% exec {*}[auto_execok notepad.exe]
To see why the {*} is needed, have a look at http://wiki.tcl.tk/765. Basically, auto_execok is pretty smart and can return a list if needed, e.g. for running start on Windows; the expansion is what makes that list work properly with exec.
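If you ever need to do the search by hand (say, to report which PATH entry a program came from), a hand-rolled lookup is easy enough. This is a minimal sketch with a hypothetical helper name, assuming Tcl 8.6+ for $::tcl_platform(pathSeparator):
# find_in_path is a hypothetical helper, not a standard Tcl command
proc find_in_path {name} {
    # PATH entries are separated by ";" on Windows and ":" elsewhere
    foreach dir [split $::env(PATH) $::tcl_platform(pathSeparator)] {
        set candidate [file join $dir $name]
        if {[file exists $candidate]} {
            return $candidate
        }
    }
    error "$name not found on PATH"
}
puts [find_in_path program.exe]
In practice auto_execok remains the better choice, since it also knows about Windows executable extensions such as .com and .bat.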

How to force STORE (overwrite) to HDFS in Pig?

When developing Pig scripts that use the STORE command, I have to delete the output directory before every run or the script stops with:
2012-06-19 19:22:49,680 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 6000: Output Location Validation Failed for: 'hdfs://[server]/user/[user]/foo/bar More info to follow:
Output directory hdfs://[server]/user/[user]/foo/bar already exists
So I'm searching for an in-Pig solution to remove the directory automatically, one that doesn't choke if the directory is non-existent at call time.
In the Pig Latin Reference I found the shell command invoker fs. Unfortunately, the Pig script breaks whenever anything produces an error, so I can't use
fs -rmr foo/bar
(i.e. remove recursively), since it breaks if the directory doesn't exist. For a moment I thought I might use
fs -test -e foo/bar
which is a test and shouldn't break, or so I thought. However, Pig again interprets test's return code on a non-existent directory as a failure and breaks.
There is a JIRA ticket for the Pig project addressing my problem, suggesting an optional parameter OVERWRITE or FORCE_WRITE for the STORE command. However, I'm using Pig 0.8.1 out of necessity, and there is no such parameter in that version.
At last I found a solution on grokbase. Since finding the solution took too long I will reproduce it here and add to it.
Suppose you want to store your output using the statement
STORE Relation INTO 'foo/bar';
Then, in order to delete the directory, you can call this at the start of the script:
rmf foo/bar
No ";" or quotation marks are required since it is a shell command.
I cannot reproduce it now, but at some point I got an error message (something about missing files) where I can only assume that rmf interfered with the map/reduce job. So I recommend putting the call before any relation declaration; after the SETs, REGISTERs and defaults should be fine.
Example:
SET mapred.fairscheduler.pool 'inhouse';
REGISTER /usr/lib/pig/contrib/piggybank/java/piggybank.jar;
%default name 'foobar'
rmf foo/bar
Rel = LOAD 'something.tsv';
STORE Rel INTO 'foo/bar';
Once you use the fs command, there are a lot of ways to do this. For an individual file, I wound up adding this to the beginning of my scripts:
-- Delete file (won't work for output, which will be a directory,
-- but will work for a file that gets copied or moved during
-- the script.)
fs -touchz top_100
rm top_100
For a directory:
-- Delete dir
fs -rm -r out
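The same pre-create trick works for the directory case too, assuming your Hadoop version's mkdir understands -p (an assumption; it does in Hadoop 2, so check your install). Create the directory if it's missing so the delete can never fail:
-- Ensure the directory exists, then remove it; works whether or
-- not a previous run left output behind (assumes mkdir -p support)
fs -mkdir -p out
fs -rm -r out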

How can I stop sourcing a (t)csh script on a certain condition without exiting the shell?

I have to source a tcsh script to modify environment variables.
Some tests are to be done, and if any fails the sourcing shall stop without exiting the shell. I do not want to run the script as a subprocess because I would need to modify env variables in the parent process, which a subprocess cannot do. This is similar to, but different from, this question, where the author actually can run the script as a subprocess.
The usual workaround is to create an alias which runs a script (csh/bash/perl/python/...) that writes a tempfile with all the env var settings and at the end sources and deletes that tempfile. Here's more info for those interested (demoing a solution for bash). For the very simple and short stuff I'm doing, that additional alias is not wanted.
So my workaround is to provoke a syntax error which stops any source execution. Here's an example:
test $ADMIN_USER = `filetest -U: $SOME_FILE` || "Error: Admin user must own admin file"
The short-circuiting || causes the error text to be ignored in case of success. On a test failure the error text is interpreted as a command, which is not found, so the source stops and produces a reasonable error message:
Error: Admin user must own admin file: Command not found.
Is there any nicer way in doing this? Some csh/tcsh built-in that I've overlooked?
Thanks to a discussion with the user shellter I just verified my assumption that
test $ADMIN_USER = `filetest -U: $SOME_FILE` || \
echo "Error: Admin user must own admin file" && \
exit
would actually quit the enclosing interactive shell. But it does not.
So the answer to my above question actually is:
Just use a normal exit and the source will stop sourcing the script while keeping the calling interactive shell running.
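A minimal sketch of a sourced script built on that, reusing the variables from the example above (the setenv at the end is a hypothetical stand-in for whatever environment setup the script really does):
# settings.csh -- meant to be sourced, not executed
if (`filetest -U: $SOME_FILE` != $ADMIN_USER) then
    echo "Error: Admin user must own admin file"
    exit 1    # stops the sourcing; the interactive shell keeps running
endif
setenv ADMIN_MODE 1    # hypothetical; only reached when the test passes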

limitations of #! in scripts

It seems as if a script with a #! prefix can have the interpreter name and ONLY one argument. Thus:
#!/bin/ls -l
works, but
#!/usr/bin/env ls -l
doesn't.
Do you agree? Any thoughts?
Francesc
Different Unixes interpret #! differently. Here's a comprehensive-looking writeup: http://www.in-ulm.de/~mascheck/various/shebang/
It seems that the lowest common denominator across platforms is "the interpreter (which must not itself be a script) and no more than one argument".
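The usual portable workaround when an interpreter needs more than one argument is a one-line wrapper script whose #! names a shell, letting the shell pass the extra arguments along, e.g. for the ls -l example above:
#!/bin/sh
# the kernel only has to handle one interpreter with no arguments here;
# the shell then runs the real command with as many arguments as needed
exec ls -l "$@"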
Originally, we only had one shell on Unix. When you asked to run a command, the shell would attempt to invoke one of the exec() system calls on it. If the command was an executable, the exec would succeed and the command would run. If the exec() failed, the shell would not give up; instead it would try to interpret the command file as if it were a shell script.
Then Unix got more shells and the situation became confused. Most folks would write scripts in one shell and type commands in another. And each shell had differing rules for feeding scripts to an interpreter.
This is when the “#! /” trick was invented. The idea was to let the kernel’s exec() system calls succeed with shell scripts. When the kernel tries to exec() a file, it looks at the first 4 bytes, which represent an integer called a magic number. This tells the kernel whether or not it should try to run the file. So “#! /” was added to the magic numbers the kernel knows, and the kernel was extended to actually run shell scripts by itself. But some people kept leaving the space out and typing “#!/”, so the kernel was extended a bit again to allow the 3-byte “#!/” to work as a special magic number as well.