make/0 functionality for SICStus - module

How can I ensure that all modules (and ideally also all other files that have been loaded or included) are up to date? When issuing use_module(mymodule), SICStus compares the modification date of the file mymodule.pl and reloads it if it is newer. Included files will also trigger a recompilation. But it does not recheck all modules used by mymodule.
In brief: how can I get functionality similar to what SWI-Prolog offers with make/0?

There is nothing in SICStus Prolog that provides that kind of functionality.
A big problem is that current Prologs are too dynamic for something like make/0 to work reliably except in very simple cases. With features like term expansion, goals executed during load (including file-loading goals, which is common), etc., it is not possible to know how to reliably re-load files. I have not looked closely at it, but presumably make/0 in SWI-Prolog has the same problem.
I usually just re-start the Prolog process and load the "main" file again, i.e. a file that loads everything I need.
PS. I was not able to get code formatting in the comments, so I put it here instead: an example of why make/0 needs to guard against 'user' as the File from current_module/2:
| ?- [user].
% compiling user...
| :- module(m,[p/0]). p. end_of_file.
% module m imported into user
% compiled user in module m, 0 msec 752 bytes
yes
| ?- current_module(M, F), F==user.
F = user,
M = m ? ;
no
| ?-

So far, I have lived with several hacks:
Up to 0.7 – pre-module times
SICStus always had ensure_loaded/1 of Quintus origin, which was not only a directive (like in ISO), but was also a command. So I wrote my own make-predicate simply enumerating all files:
l :-
   ensure_loaded([f1,f2,f3]).
Upon issuing l., only those files that were modified in the meantime were reloaded.
I probably could also have written it like this, had I read the meanual (sic):
l :-
   \+ ( source_file(F), \+ ensure_loaded(F) ).
3.0 – modules
With modules, things changed a bit. On the one hand, there were those files that were loaded manually into a module, like ensure_loaded(module:[f1,f2,f3]), and on the other hand there were those that were clean modules. It turned out that there is a way to globally ensure that a module is loaded, without interfering with the actual import lists, simply by stating use_module(m1, []), which is again both a directive and a command. The point is the empty list, which causes the module to be rechecked and reloaded; and thanks to the empty list, that statement can be placed anywhere.
In the meantime, I use the following module:
:- module(make,[make/0]).
make :-
   \+ ( current_module(_, F), \+ use_module(F, []) ).
This works for all "legal" modules, and as long as the interfaces do not change. What I still dislike is its verbosity: for each checked and unmodified module there is one message line, so I get a page full of such messages when I just want to check that everything is up to date. Ideally, such messages would only show when something new happens.
| ?- make.
% module m2 imported into make
% module m1 imported into make
% module SU_messages imported into make
yes
| ?- make.
% module m2 imported into make
% module m1 imported into make
% module SU_messages imported into make
yes
An improved version takes #PerMildner's remark into account.
Further files can be reloaded if they are related to exactly one module. In particular, files loading into module user are included, like the .sicstusrc. See the link above for the full code.
% Reload files that are implicitly modules, but that are still simple to reload.
\+ (
     source_file(F),
     F \== user,
     \+ current_module(_, F),          % not officially declared as a module
     setof(M,
           P^ExF^ExM^(
               source_file(M:P, F),
               \+ current_module(M, ExF),   % not part of an official module
               \+ predicate_property(M:P, multifile),
               \+ predicate_property(M:P, imported_from(ExM))
           ),
           [M]),                       % only one module per file, others are too complex
     \+ ensure_loaded(M:F)
   ).
Note that in SWI neither ensure_loaded/1 nor use_module/2 compares file modification dates, so neither can be used to ensure that the most recent version of a file is loaded.

Related

stat function for perl6

Is there an alternative way in Perl 6 to get file attribute details like size, access time, modified time, etc., without having to invoke a native call?
As per the doc it is "unlikely to be implemented as a built in as its POSIX specific".
What workaround options are available excluding the system call to stat?
Any ideas or pointers are much appreciated.
Thanks.
See the IO::Path doc.
For example:
say 'foo'.IO.s; # 3 if 'foo' is an existing file of size 3 bytes
.IO on a string creates an IO::Path object corresponding to the filesystem entry corresponding to the path given by the string.
See examples of using junctions to get multiple attributes at the same time at the doc on ACCEPTS.
I'm not sure if the following is too much. Ignore it if it is. Hopefully it's helpful.
You can discover/explore some of what's available in Perl 6 via its HOW objects (aka Higher Order Workings objects, How Objects Work objects, metaobjects -- whatever you want to call them) which know HOW objects of a particular type work.
say IO::Path.^methods
displays:
(BUILD new is-absolute is-relative parts volume dirname basename extension
Numeric sibling succ pred open watch absolute relative cleanup resolve
parent child add chdir rename copy move chmod unlink symlink link mkdir
rmdir dir slurp spurt lines comb split words e d f s l r w rw x rwx z
modified accessed changed mode ACCEPTS Str gist perl IO SPEC CWD path BUILDALL)
Those are some of the methods available on an IO::Path object.
(You can get more or less with adverbs, eg. say IO::Path.^methods(:all), but the default display aims at giving you the ones you're likely most interested in. The up arrow (^) means the method call (.methods) is not sent to the invocant but rather is sent "upwards", up to its HOW object as explained above.)
Here's an example of applying some of them one at a time:
spurt 'foo', 'bar'; # write a three letter string to a file called 'foo'.
for <e d f s l r w rw x rwx z modified accessed changed mode>
-> $method { say 'foo'.IO."$method"() }
The second line does a for loop over the methods listed by their string names in the <...> construct. To call a method on an invocant given its name in a variable $qux, write ."$qux"(...).
For those looking for an answer to this question in 2021: there is the File::Stat module. It provides some additional stat(2) information such as UID, GID and mode.
#!/usr/bin/env raku
use File::Stat <stat>;
say File::Stat.new(path => $?FILE).mode.base(8);
say stat($?FILE).uid;
say stat($?FILE).gid;

Bazel Checkers Support

What options does Bazel provide for creating new targets, or extending existing ones, that call C/C++ code checkers such as
qac
cppcheck
iwyu
?
Do I need to use a genrule or is there some other target rule for that?
Is https://bazel.build/versions/master/docs/be/extra-actions.html my only viable choice here?
In security-critical software industries, such as aviation and automotive, it is very common to use the results of these calls to collect so-called "metric reports".
In these cases, calls to such linters must produce outputs that are further processed by the build actions of these metric-report collectors. In such cases, I cannot find a useful way of reusing Bazel's "extra actions". Any ideas?
I've written something which uses extra actions to generate a compile_commands.json file used by clang-tidy and other tools, and I'd like to do the same kind of thing for iwyu when I get around to it. I haven't used those other tools, but I assume they fit the same pattern too.
The basic idea is to run an extra action which generates some output for each file (aka C/C++ compilation command), and then find all the output files afterwards (outside of Bazel) and aggregate them. A reasonably complete example is here for reference. Basically, the action listener (written in Python) decodes the extra action proto and extracts the source files, compiler options, etc:
action = extra_actions_base_pb2.ExtraActionInfo()
with open(argv[1], 'rb') as f:
    action.MergeFromString(f.read())
cpp_compile_info = action.Extensions[extra_actions_base_pb2.CppCompileInfo.cpp_compile_info]
compiler = cpp_compile_info.tool
options = ' '.join(cpp_compile_info.compiler_option)
source = cpp_compile_info.source_file
output = cpp_compile_info.output_file
print('%s %s -c %s -o %s' % (compiler, options, source, output))
If you give the extra action an output template, then it can write that output to a file. If you give the output files distinctive names, you can find them all in the output tree and merge them together however you want.
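The aggregation step can be sketched like this (a minimal sketch, not the actual implementation: it assumes each extra action wrote its compile command as a small JSON object to a file whose name ends in `.compile_command.json` somewhere under the output tree; the function name and file suffix are made up for illustration):

```python
import json
import os

def merge_compile_commands(output_tree, dest):
    """Collect per-file JSON fragments written by the action listener
    and merge them into a single compile_commands.json-style array."""
    entries = []
    for root, _dirs, files in os.walk(output_tree):
        for name in files:
            if name.endswith('.compile_command.json'):
                with open(os.path.join(root, name)) as f:
                    entries.append(json.load(f))
    with open(dest, 'w') as f:
        json.dump(entries, f, indent=2)
    return len(entries)
```

Run it over the output tree after the build; tools like clang-tidy can then be pointed at the merged compile_commands.json.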
A more sophisticated option is to use bazel query --output=proto and write code to calculate the extra action output filenames of the targets you're interested in from there. That requires writing more code, but you don't have problems with old output files in the output tree that are accidentally included when aggregating.
FWIW, Aspects are another possibility. However, I think extra actions work acceptably for this.

Redirecting standard output stream

How do I write the output of listing/0 in the SWI-Prolog REPL to a file?
?- listing > file.txt.
You can open a file for writing and redirect current_output to it like this:
?- current_output(Orig),          % save current output
   open('file.txt', write, Out),
   set_output(Out),
   listing,
   close(Out),
   set_output(Orig).              % restore current output
Alternatively, SWI-Prolog provides a predicate with_output_to/2 which can be used to redirect the current output for one goal. Make sure to read the documentation, but in short:
?- open('file.txt', write, Out),
   with_output_to(Out, listing),
   close(Out).
Now the output of listing/0 will be written to file.txt.
But keep in mind that there is going to be a whole lot of stuff in there. You may want to use listing/1 for specific predicates instead. In that case, using clause/2 and portray_clause/2 is another option, especially if you want more control over what you write to the file and how. listing is, I guess, meant only for interactive use.
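For example, a rough sketch of the clause/2 plus portray_clause/2 route (dump_predicate/2 is a made-up name, and this only works for predicates whose clauses are accessible to clause/2, e.g. dynamic ones):

```prolog
% Write all clauses of Name/Arity to File, one portray_clause/2 call per clause.
dump_predicate(Name/Arity, File) :-
    functor(Head, Name, Arity),
    setup_call_cleanup(
        open(File, write, Out),
        forall(clause(Head, Body),
               portray_clause(Out, (Head :- Body))),
        close(Out)).
```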

How to write particular function to file from SBCL interpreter?

Say I've played a bit with SBCL with no SLIME, nothing whatsoever, just the plain interpreter. Now I want to save a couple of functions in a file. Not a core image, just bits of code in text form. How should I do that?
There are two ways to do that: use DRIBBLE and/or FUNCTION-LAMBDA-EXPRESSION
The first is to always use the Common Lisp function DRIBBLE before experimenting:
rjmba:tmp joswig$ sbcl
This is SBCL 1.1.9, an implementation of ANSI Common Lisp.
More information about SBCL is available at <http://www.sbcl.org/>.
SBCL is free software, provided as is, with absolutely no warranty.
It is mostly in the public domain; some portions are provided under
BSD-style licenses. See the CREDITS and COPYING files in the
distribution for more information.
Dribble takes a pathname for a text file. Once called, the interactive IO will be written to that file.
* (dribble "/Lisp/tmp/2013-09-06-01.text")
* (defun foo (a) (1+ a))
FOO
* (foo 10)
11
* (quit)
See the file:
rjmba:tmp joswig$ cat 2013-09-06-01.text
* (defun foo (a) (1+ a))
FOO
* (foo 10)
11
* (quit)
From the above you should be able to see whether you have entered any interesting functions...
You could also set up your SBCL (for example in the init file) to start a dribble at every launch. Calling (dribble) without arguments ends the dribble.
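For example, a line like the following in the init file would start a transcript for every session (the pathname here is purely illustrative):

```lisp
;; In ~/.sbclrc -- transcribe every interactive session (path is illustrative).
(dribble "/tmp/sbcl-session.text")
```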
Next: FUNCTION-LAMBDA-EXPRESSION:
* (defun foo (b) (1- b))
FOO
Now you can call FUNCTION-LAMBDA-EXPRESSION to get the definition back. It might be slightly altered, but it should do the job to recover valuable ideas written as code:
* (function-lambda-expression #'foo)
(SB-INT:NAMED-LAMBDA FOO
(B)
(BLOCK FOO (1- B)))
NIL
FOO
If you are using sb-readline or rlwrap, you can press the up arrow until you reach the point where the function was defined, and copy and paste it to a file. You might also have it in the terminal window history.
If none of these works and only compiled definitions are available, then the only way to save them is by dumping a core image.
For next time, you could write a macro that stores every definition's source in a special variable so that you can easily retrieve them.
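Such a macro could look roughly like this (a sketch; DEFUN*, *DEFINITIONS* and SAVE-DEFINITIONS are made-up names):

```lisp
(defvar *definitions* '()
  "Source forms captured by DEFUN*, most recent first.")

(defmacro defun* (name lambda-list &body body)
  "Like DEFUN, but also record the source form in *DEFINITIONS*."
  `(progn
     (push '(defun ,name ,lambda-list ,@body) *definitions*)
     (defun ,name ,lambda-list ,@body)))

(defun save-definitions (pathname)
  "Write all recorded definitions to PATHNAME in definition order."
  (with-open-file (out pathname :direction :output :if-exists :supersede)
    (dolist (form (reverse *definitions*))
      (print form out))))
```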

Why do I need the shared sources file in smalltalk dialects like Pharo?

The usual installation instructions tell me that I need at least three files to run Pharo:
image file
changes file
source file (e.g. PharoV10.sources)
I've run Pharo 2 without the sources file and I didn't see any problem. All sources seemed to be available.
So, why do I need the sources file (e.g. PharoV10.sources)?
The image file contains only the compiled code, not the original source code. The changes file contains source code for the stuff that you added to the system yourself but not source code for existing system classes. To get the source code for existing system classes you need the sources file.
Having said that, Smalltalk can decompile the code and produce what looks like source code if the sources file isn't available. This code will be missing proper variable names, comments and spacing. You really don't want to use decompiled source code so you need access to the sources file.
3 possible explanations, try to identify the joke:
the Browser is smart enough to show you a source reconstructed (decompiled) from the CompiledMethod byte codes. Hint: in this case, you lose all comments
there is a search path for the sources files, and one is found somewhere on your disk
Pharo is changing so fast that every source is now found in the .changes file
For verifying 1., you could try to browse references to Decompiler (there are a bit too many uses to my own taste).
For verifying 2., you could start browsing implementors of #openSourceFiles
For verifying 3., you could evaluate this snippet:
| nSources nChanges |
nSources := nChanges := 0.
SystemNavigation default allBehaviorsDo: [:b |
    b selectorsDo: [:s |
        (b compiledMethodAt: s) fileIndex = 1
            ifTrue: [nSources := nSources + 1]
            ifFalse: [nChanges := nChanges + 1]]].
^{nSources. nChanges}
It is also possible that Pharo downloads PharoV10.sources automatically.