Redirecting the standard output stream to a file

How do I write the output of listing/0 in the SWI-Prolog REPL to a file?
?- listing > file.txt.

You can open a file for writing and redirect current_output to it like this:
?- current_output(Orig), % save current output
open('file.txt', write, Out),
set_output(Out),
listing,
close(Out),
set_output(Orig). % restore current output
Alternatively, SWI-Prolog provides a predicate with_output_to/2 which can be used to redirect the current output for one goal. Make sure to read the documentation, but in short:
?- open('file.txt', write, Out),
with_output_to(Out, listing),
close(Out).
Now the output of listing/0 will be written to file.txt.
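If you also want to be robust against exceptions, you can combine this with setup_call_cleanup/3 so the stream is always closed. A minimal sketch:
?- setup_call_cleanup(
       open('file.txt', write, Out),
       with_output_to(Out, listing),
       close(Out)).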
But keep in mind that there is going to be a whole lot of stuff in there. You may want to use listing/1 for specific predicates instead. In that case, using clause/2 and portray_clause/2 is another option, especially if you want more control over what you write to the file and how; listing/0 is really meant for interactive use.
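For illustration, a minimal sketch of the clause/2 and portray_clause/2 approach, assuming a hypothetical predicate foo/1 that is accessible to clause/2 (e.g. declared dynamic):
?- open('foo.pl', write, Out),
   forall(clause(foo(X), Body),
          portray_clause(Out, (foo(X) :- Body))),
   close(Out).
portray_clause/2 handles quoting and layout, and prints a clause whose body is true as a plain fact.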

Related

SCIP - run (nearly) same LP on different instances

I have an LP, formulated in the modelling language Zimpl, that I want to run on many instances, which are in different files.
Additionally, I want to change one parameter in this LP.
For a single call, my file test.zpl looks like this:
param FILE := "file1.dat";
param BOUND := 42;
[test_body: Rest of LP]
Now I want to change those two parameters. SCIP has the -c option to execute some command, but I cannot find a command that achieves this: all the parameter changes I found affect the algorithm, not the data.
The change command for modifying the problem does not seem to allow adding new parameters/variables.
In the end, I expect the solution to look something like
scip -c "[set my parameters]; read test_body.zpl; optimize; quit"
How do I set these problem parameters?
I am not aware of any commands that support the modification of model parameters as you wish. However, if you don't hardcode the value of param BOUND in the .zpl file (instead, move the value to the .dat file and use a proper read statement in the model), then you could proceed as follows:
Make a copy of your data file such that each copy contains a distinct value of param BOUND
Call scip.exe separately with each data file (you could also use a simple batch script, as sketched below)
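For illustration, a minimal shell sketch of that loop. The data file names, and the assumption that test.zpl reads a fixed file current.dat, are made up for the example:
#!/bin/sh
# one data file per BOUND value; test.zpl is assumed to read "current.dat"
for dat in bound42.dat bound43.dat bound44.dat; do
    cp "$dat" current.dat
    scip -c "read test.zpl" -c "optimize" -c "display solution" -c "quit"
done
Each -c option passes one command line to the SCIP shell, so the read/optimize/quit sequence from the question carries over unchanged.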

Bazel Checkers Support

What options does Bazel provide for creating new targets, or extending existing ones, that call C/C++ code checkers such as
qac
cppcheck
iwyu
?
Do I need to use a genrule or is there some other target rule for that?
Is https://bazel.build/versions/master/docs/be/extra-actions.html my only viable choice here?
In security-critical software industries, such as aviation and automotive, it is very common to use the results of these calls to collect so-called "metric reports".
In these cases, calls to such linters must produce outputs that are further processed by the build actions of these metric-report collectors. For such cases I cannot find a useful way of reusing Bazel's "extra actions". Any ideas?
I've written something which uses extra actions to generate a compile_commands.json file used by clang-tidy and other tools, and I'd like to do the same kind of thing for iwyu when I get around to it. I haven't used those other tools, but I assume they fit the same pattern too.
The basic idea is to run an extra action which generates some output for each file (aka C/C++ compilation command), and then find all the output files afterwards (outside of Bazel) and aggregate them. A reasonably complete example is here for reference. Basically, the action listener (written in Python) decodes the extra action proto and extracts the source files, compiler options, etc:
import sys
# Generated from Bazel's extra_actions_base.proto (the exact import path
# depends on how you compile the proto):
import extra_actions_base_pb2

# Decode the serialized ExtraActionInfo proto Bazel passes to the listener.
action = extra_actions_base_pb2.ExtraActionInfo()
with open(sys.argv[1], 'rb') as f:
    action.MergeFromString(f.read())
# The C++ compile details live in an extension field of the proto.
cpp_compile_info = action.Extensions[extra_actions_base_pb2.CppCompileInfo.cpp_compile_info]
compiler = cpp_compile_info.tool
options = ' '.join(cpp_compile_info.compiler_option)
source = cpp_compile_info.source_file
output = cpp_compile_info.output_file
print('%s %s -c %s -o %s' % (compiler, options, source, output))
If you give the extra action an output template, then it can write that output to a file. If you give the output files distinctive names, you can find them all in the output tree and merge them together however you want.
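For reference, a minimal sketch of how the pieces fit together in a BUILD file (the target and file names are made up; see the extra-actions documentation linked above for details):
# BUILD: run the listener for every C++ compile, one output file per action
extra_action(
    name = "extract_compile_command_action",
    cmd = "$(location :extract_compile_command) " +
          "$(EXTRA_ACTION_FILE) $(output $(ACTION_ID).compile_command.json)",
    out_templates = ["$(ACTION_ID).compile_command.json"],
    tools = [":extract_compile_command"],
)

action_listener(
    name = "extract_compile_command_listener",
    extra_actions = [":extract_compile_command_action"],
    mnemonics = ["CppCompile"],
)
You would then build with --experimental_action_listener=//path/to:extract_compile_command_listener and collect the *.compile_command.json files from the output tree.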
A more sophisticated option is to use bazel query --output=proto and write code to calculate the extra action output filenames of the targets you're interested in from there. That requires writing more code, but you don't have problems with old output files in the output tree that are accidentally included when aggregating.
FWIW, Aspects are another possibility. However, I think extra actions work acceptably for this.

make/0 functionality for SICStus

How can I ensure that all modules (and ideally also all other files that have been loaded or included) are up to date? When issuing use_module(mymodule), SICStus compares the modification date of the file mymodule.pl and reloads it if it is newer. Files pulled in with include will also trigger recompilation. But it does not recheck all the modules used by mymodule.
In brief, how can I get functionality similar to what SWI offers with make/0?
There is nothing in SICStus Prolog that provides that kind of functionality.
A big problem is that current Prologs are too dynamic for something like make/0 to work reliably except for very simple cases. With features like term expansion, goals executed during load (including file loading goals, which is common), etc., it is not possible to know how to reliably re-load files. I have not looked closely at it, but presumably make/0 in SWI Prolog has the same problem.
I usually just re-start the Prolog process and load the "main" file again, i.e. a file that loads everything I need.
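For illustration, such a "main" file can be as simple as this (the file and module names are made up):
% load.pl -- consult this after (re)starting the Prolog process
:- use_module(m1).
:- use_module(m2).
:- ensure_loaded(helpers).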
PS: I was not able to get code formatting in the comments, so I put it here instead. An example of why make/0 needs to guard against user as the File from current_module/2:
| ?- [user].
% compiling user...
| :- module(m,[p/0]). p. end_of_file.
% module m imported into user
% compiled user in module m, 0 msec 752 bytes
yes
| ?- current_module(M, F), F==user.
F = user,
M = m ? ;
no
| ?-
So far, I have lived with several hacks:
Up to 0.7 – pre-module times
SICStus always had ensure_loaded/1 of Quintus origin, which was not only a directive (like in ISO), but was also a command. So I wrote my own make-predicate simply enumerating all files:
l :-
   ensure_loaded([f1,f2,f3]).
Upon issuing l., only those files that were modified in the meantime were reloaded.
Probably I could also have written it like this, had I read the meanual (sic):
l :-
   \+ ( source_file(F), \+ ensure_loaded(F) ).
3.0 – modules
With modules, things changed a bit. On the one hand there were files loaded manually into a module, like ensure_loaded(module:[f1,f2,f3]); on the other hand there were clean modules. It turned out that there is a way to globally ensure that a module is loaded, without interfering with the actual import lists, simply by stating use_module(m1, []), which again is both a directive and a command. The point is the empty import list: the module is still rechecked and reloaded, but thanks to the empty list the statement can be placed anywhere.
In the meantime, I use the following module:
:- module(make,[make/0]).
make :-
   \+ ( current_module(_, F), \+ use_module(F, []) ).
This works for all "legal" modules, as long as the interfaces do not change. What I still dislike is its verbosity: for each checked but unmodified module there is one message line, so I get a page full of such messages when I just want to check that everything is up to date. Ideally, such messages would only show up when something new happens.
| ?- make.
% module m2 imported into make
% module m1 imported into make
% module SU_messages imported into make
yes
| ?- make.
% module m2 imported into make
% module m1 imported into make
% module SU_messages imported into make
yes
An improved version takes @PerMildner's remark into account.
Further files can be reloaded if they are related to exactly one module. In particular, files loaded into module user, like the .sicstusrc, are included. See the link above for the full code.
% reload files that are implicitly modules, but that are still simple to reload
\+ (
    source_file(F),
    F \== user,
    \+ current_module(_, F),   % not officially declared as a module
    setof(M,
          P^ExF^ExM^(
              source_file(M:P, F),
              \+ current_module(M, ExF),   % not part of an official module
              \+ predicate_property(M:P, multifile),
              \+ predicate_property(M:P, imported_from(ExM))
          ),
          [M]),                % only one module per file, others are too complex
    \+ ensure_loaded(M:F)
).
Note that in SWI, neither ensure_loaded/1 nor use_module/2 compares file modification dates, so neither can be used to ensure that the most recent version of a file is loaded.

How do I tell Octave where to find functions without picking up other files?

I've written an Octave script, hello.m, which calls subfunc.m, and which takes a single input file, data.txt, as a command-line argument, loading it with load(argv(){1}).
If I put all three files in the same directory, and call it like
./hello.m data.txt
then all is well.
But if I've got another data.txt in another directory, and I want to run my script on it, and I call
../helloscript/hello.m data.txt
this fails because hello.m can't find subfunc.m.
If I call
octave --path "../helloscript" ../helloscript/hello.m data.txt
then that seems to work fine.
The problem is that if I don't have a data.txt in the current directory, the script will pick up any data.txt that is lying around in ../helloscript.
This seems a bit fragile. Is there any way to tell Octave, preferably in the script itself, to get subfunctions from the same directory as the script, but to get everything else relative to the current directory?
The best robust solution I can think of at the moment is to inline the subfunction in the script, which is a bit nasty.
Is there a good way to do this, or is it just a thorny problem that will cause occasional hard to find problems and can't be avoided?
Is this in fact just a general problem with scripting languages that I've never noticed before? How does, e.g., Python deal with it?
It seems like there should be some sort of library-load-path that can be set without altering the data-load-path.
Adding all your subfunctions to your program file is not nasty at all. Why would you think so? It is perfectly normal to have function definitions in your script. The only language I know of that does not allow this is Matlab, but that's just braindead.
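For illustration, a minimal sketch of a self-contained script with the helper defined inline (subfunc's body here is made up):
#!/usr/bin/env octave
1; # mark this file as a script file, not a function file

function y = subfunc (x)
  y = 2 * x; # hypothetical helper, now living inside the script
endfunction

data = load (argv (){1});
disp (subfunc (data));
The leading 1; matters: a file whose first statement is a function definition would otherwise be treated as a function file.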
The other alternative you have is to check that the input file argument, data.txt exists. Like so:
fpath = argv (){1};
[info, err, msg] = stat (fpath);
if (err)
  error ("could not stat `%s' : %s", fpath, msg);
endif
## continue your script knowing the file exists
But really, I would recommend you to do both: keep your subfunctions in your main program (the only reason to have them in a separate file is if you plan on sharing them with other programs), and always check input arguments.

correct way to write to the same file from multiple processes awk

The title says it all.
I have 4 awk processes logging to the same file, and the output seems fine, not mangled, but I'm not sure that just redirecting print output like this: print "xxx" >> file in every process is the right way to do it.
There are many similar questions around the site, but this one is particularly about awk and a pragmatic, code-correct way to approach the problem.
EDIT
Sorry folks, of course I wasn't "just redirecting" like I wrote, I was appending.
No, it is not safe.
In awk, print "foo" > "file" opens the file and truncates it, and the file stays open until the end of the script.
That is, if your 4 awk processes start writing to the same file at different times, they will overwrite each other's results.
To reproduce it, you could start two (or more) awk like this:
awk '{while(++i<9){system("sleep 2");print "p1">"file"}}' <<<"" &
awk '{while(++i<9){system("sleep 2");print "p2">"file"}}' <<<"" &
and if you monitor the content of file at the same time, you will see that in the end there are not exactly 8 "p1" and 8 "p2" entries.
Using >> avoids losing entries, but the sequence of entries from the 4 processes can still end up interleaved.
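For comparison, the same test with appending; a sketch only, but now all 16 entries should survive, though their order may be mixed:
awk '{while(++i<9){system("sleep 2");print "p1">>"file"}}' <<<"" &
awk '{while(++i<9){system("sleep 2");print "p2">>"file"}}' <<<"" &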
EDIT
Ok, the > was a typo.
I don't know why you really need 4 processes writing to the same file. As I said, with >> the entries won't get lost (if your awk scripts work correctly). However, I personally wouldn't do it this way: if I had to have 4 processes, I would write to different files. I don't know your requirements, I'm just speaking in general.
Outputting to different files makes testing and debugging easier, for example when one of your processes has a problem and you want to track it down.
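For illustration, a sketch of that separate-files approach (the input file names are made up):
# each of the four processes logs to its own file; merge afterwards
awk '{print "p1:", $0}' input1 > file.p1 &
awk '{print "p2:", $0}' input2 > file.p2 &
awk '{print "p3:", $0}' input3 > file.p3 &
awk '{print "p4:", $0}' input4 > file.p4 &
wait # let all four finish
cat file.p1 file.p2 file.p3 file.p4 > file
This also makes it easy to inspect one process's output in isolation when debugging.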
I think using the operating system's print command is safe, since it appends the string you provide to the file's write buffer, and the system then manages the actual writing of the data to disk. If another process wants to use the same file, the system will see that the resource is already claimed and will wait for the first process to finish before allowing the second one to write to the buffer.