KFramework: Issues with `#isConcrete` and function global configuration lookup

I am trying to compile a module using the syntax for doing a global environment lookup in a function rule.
When I try to compile my module using the Java backend, I get the error Conversion of KAs unsupported on Java backend! (_0,(I),_1) #as #Configuration, presumably related to the desugaring (even though the pending documentation states "This is completely desugared by the K frontend and does not require any special support in the backend." ...but as it is pending, I understand it is probably not always entirely accurate).
So I tried the Haskell backend. This also gives me an error about using #isConcrete, which I noticed is documented as not supported by the Haskell backend, even though I was not using #isConcrete anywhere. However, I discovered that if I remove the cell with stream="stdout" from my configuration, I no longer get this error.
Is it possible to both use this syntax and have stdout? Ideally I could keep using the Java backend, because that's where I seem to run into the fewest issues.
Below is a simple example that produces the error:
module TEST-SYNTAX
  imports DOMAINS-SYNTAX
  syntax Prog ::= run(Int)
endmodule

module TEST
  imports TEST-SYNTAX
  imports DOMAINS
  syntax KResult

  configuration
    <k> $PGM:Prog </k>
    <bar> 6 </bar>
    <log stream="stdout"> .List </log>

  syntax Int ::= foo(Int) [function]
  rule [[ foo(0) => I ]]
       <bar> I </bar>

  rule run(I) => foo(I)
endmodule
with the program being simply run(0).
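(For reference, the same computation could presumably be written without the configuration lookup by reading the cell in the non-function rule and threading its value through an explicit argument, roughly as in the sketch below; my question, though, is specifically about keeping the [[ ]] lookup syntax together with stdout.)
syntax Int ::= foo(Int, Int) [function]
rule foo(0, I) => I
rule <k> run(I) => foo(I, B) ... </k>
     <bar> B </bar>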

Related

Is there a way to add new constructors to the hooked sort Int?

I'm trying to migrate a semantics written with the K tool, version 3.6 (yeah...).
We have this declaration:
syntax Int ::= #cint(Int,Int)
and when I compile the semantics with K version 5.1.16 and the LLVM backend, I get this error:
[Error] Compiler: Cannot add new constructors to hooked sort Int
Is there a way to support this declaration with version 5.1.16?
The backends don't support extending the hooked sorts.
But you can use macros to bypass it:
$ cat test.k
module TEST
  imports INT
  configuration <k> one +Int 2 </k>
  syntax Int ::= "one"
  rule one => 1 [macro]
endmodule
$ kompile test.k
$ krun
<k>
3 ~> .
</k>
Macros are handled in the front-end, and as long as you process all your constructors that way, you can get away with extending the hooked sorts.
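Applied to the #cint from the question, a purely illustrative sketch could look like the following; the right-hand side is only an assumption (the question does not show what #cint is supposed to compute), so substitute whatever the 3.6 semantics actually did:
syntax Int ::= #cint(Int, Int)
rule #cint(_Width, V) => V [macro]   // placeholder meaning: assumes #cint just yields its second argument
As with one above, the constructor is eliminated by macro expansion in the front-end, so the backend never sees a new constructor on the hooked sort.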

How do I import SystemVerilog packages using Yosys?

I was wondering how to import SystemVerilog packages while using Yosys. For instance, in the file my_pkg.sv I have the following:
package my_pkg;
  parameter KL = 64;
endpackage
Now, in the file top.sv, I have the following:
import my_pkg::*;

module top (
  input  logic i_clk,
  output logic o_done
);
endmodule
Yosys gives the following error:
top.sv:1: ERROR: syntax error, unexpected TOK_ID
I was expecting Yosys to accept the syntax since I am merely importing the package into the top-level file. This is a common way to import all the contents of a package inside a module and hence avoid having to prefix the package name every time a package parameter is used inside the module. It works in Modelsim and VCS, as well as in DC. Is there a way to accomplish this in Yosys?
Looks like Yosys (Yosys 0.9+1706 git sha1 ff4ca9dd, gcc 8.4.0-1ubuntu1~18.04 -fPIC -Os) does not support top-level imports. One possible workaround is to use a tool to convert the SystemVerilog code into Verilog and then feed the Verilog code into Yosys. One such tool is sv2v by Zach Snow (kudos to Zach for the hint) at https://github.com/zachjs/sv2v.
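For illustration, the flow could look roughly like this (the output file name and the synthesis script are just typical examples, adjust to your design):
$ sv2v my_pkg.sv top.sv > top_conv.v
$ yosys -p "read_verilog top_conv.v; synth -top top"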

make/0 functionality for SICStus

How can I ensure that all modules (and ideally also all other files that have been loaded or included) are up to date? When issuing use_module(mymodule), SICStus compares the modification date of the file mymodule.pl and reloads it if it is newer. Files pulled in via include will also trigger a recompilation. But it does not recheck all modules used by mymodule.
In brief, how can I get functionality similar to what SWI offers with make/0?
There is nothing in SICStus Prolog that provides that kind of functionality.
A big problem is that current Prologs are too dynamic for something like make/0 to work reliably except for very simple cases. With features like term expansion, goals executed during load (including file loading goals, which is common), etc., it is not possible to know how to reliably re-load files. I have not looked closely at it, but presumably make/0 in SWI Prolog has the same problem.
I usually just re-start the Prolog process and load the "main" file again, i.e. a file that loads everything I need.
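For example, such a "main" file is nothing more than a loader (module names illustrative):
% main.pl -- restarting Prolog and consulting this one file reloads everything
:- use_module(m1).
:- use_module(m2).
:- ensure_loaded(helpers).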
PS. I was not able to get code formatting in the comments, so I put it here instead: Example why make/0 needs to guard against 'user' as the File from current_module/2:
| ?- [user].
% compiling user...
| :- module(m,[p/0]). p. end_of_file.
% module m imported into user
% compiled user in module m, 0 msec 752 bytes
yes
| ?- current_module(M, F), F==user.
F = user,
M = m ? ;
no
| ?-
So far, I have lived with several hacks:
Up to 0.7 – pre-module times
SICStus always had ensure_loaded/1 of Quintus origin, which was not only a directive (like in ISO), but was also a command. So I wrote my own make-predicate simply enumerating all files:
l :-
   ensure_loaded([f1,f2,f3]).
Upon issuing l., only those files that were modified in the meantime were reloaded.
Probably I could also have written it like this, had I read the meanual (sic):
l :-
   \+ ( source_file(F), \+ ensure_loaded(F) ).
3.0 – modules
With modules things changed a bit. On the one hand, there were those files that were loaded manually into a module, like ensure_loaded(module:[f1,f2,f3]), and then there were the clean modules. It turned out that there is a way to globally ensure that a module is loaded, without interfering with the actual import lists, simply by stating use_module(m1, []), which is again both a directive and a command. The point is the empty list: it causes the module to be rechecked and reloaded, and thanks to the empty import list the statement can be placed anywhere.
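That is, a line like the following can be scattered wherever convenient; it keeps m1 checked (and reloaded when modified) without importing anything from it:
:- use_module(m1, []).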
In the meantime, I use the following module:
:- module(make,[make/0]).

make :-
   \+ ( current_module(_, F), \+ use_module(F, []) ).
This works for all "legal" modules, as long as the interfaces do not change. What I still dislike is its verbosity: for each checked and unmodified module there is one message line, so I get a page full of such messages when I just want to check that everything is up to date. Ideally, such messages would only appear when something actually happens.
| ?- make.
% module m2 imported into make
% module m1 imported into make
% module SU_messages imported into make
yes
| ?- make.
% module m2 imported into make
% module m1 imported into make
% module SU_messages imported into make
yes
An improved version takes #PerMildner's remark into account.
Further files can be reloaded if they are related to exactly one module. In particular, files that load into module user, like the .sicstusrc, are included. See the above link for the full code.
% reload files that are implicitly modules, but that are still simple to reload
\+ (
     source_file(F),
     F \== user,
     \+ current_module(_, F),                    % not officially declared as a module
     setof(M,
           P^ExF^ExM^( source_file(M:P, F),
                       \+ current_module(M, ExF),             % not part of an official module
                       \+ predicate_property(M:P, multifile),
                       \+ predicate_property(M:P, imported_from(ExM))
                     ),
           [M]),                                 % only one module per file, others are too complex
     \+ ensure_loaded(M:F)
   ).
Note that in SWI neither ensure_loaded/1 nor use_module/2 compares file modification dates, so neither can be used to ensure that the most recent version of a file is loaded.

Preprocessors and use association

In summary, is it possible to access via use association a preprocessor directive defined in a Fortran module?
Context
I use preprocessor macros wrapping ordinary subroutines to print warning and error messages. For example, I use the following module/subroutine, in the file errors.f, to print warning messages:
module errors
  use, intrinsic :: iso_fortran_env, only : stderr => error_unit
  implicit none
contains
  !> Print formatted warning message.
  subroutine warn_print( file, line, mesg )
    implicit none
    character(len=*), intent(in) :: file
    integer,          intent(in) :: line
    character(len=*), intent(in) :: mesg
    write(stderr,'(a,a,a,i4,a,a)') "WARNING::", file, ":", line, ": ", mesg
  end subroutine warn_print
end module errors
and, in a separate file errors.h, I use the above module and define a preprocessor macro
use errors
#define warn(text) warn_print(__FILE__,__LINE__,text)
I then #include the file errors.h in whichever file/module I wish to use the warning print routine, which allows me to simply write
call warn("Some warning message")
and the compiler will automatically include the file and line number at which the warning message was called.
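For instance, if the call above sits on line 42 of myfile.F90 (names illustrative), the preprocessor expands it to roughly
call warn_print("myfile.F90", 42, "Some warning message")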
Question
The use of #include 'errors.h' is rather idiosyncratic in Fortran code, and it hides the use of the errors module. Ideally I would prefer to define the above preprocessor macro in the errors module itself. However, the macro is then not available to the program/module that uses this module.
Is there a way to make a preprocessor directive accessible via use association?
The only other way I can think of doing it is to just have the errors module and define the preprocessor macro in my call to the compiler (using, for example, the -D flag with ifort). Any suggestions for alternative ways of achieving the above would be greatly appreciated.
No, it is simply not possible, since the preprocessing and compilation stages are completely separate from each other, and the C preprocessor does not know anything about the Fortran USE statement.
I routinely #include 'config.h' (from autoconf) in most of my .F90 sources, without problems.
This may not be what you are looking for, but if you are using ifort, you can use traceback functionality to achieve something similar (a bit more powerful, but also more ugly), e.g.
program tracetest
  call sub(5)
  write(*,*) '=== DONE ==='
end program tracetest

subroutine sub(n)
  use ifcore
  integer :: n
  character(len=60) :: str
  write(str,*) '=== TROUBLE DETECTED: n =', n
  call tracebackqq(str, -1)   ! code -1 means "do not abort"
end subroutine sub
Then compile with -traceback to see the source file, line, and stack trace. The stack trace and line may be obscured because of inlining; to avoid that, you can specify -traceback -O0 to get something like this:
=== TROUBLE DETECTED: n = 5
Image PC Routine Line Source
a.out 0000000000473D0D Unknown Unknown Unknown
a.out 0000000000472815 Unknown Unknown Unknown
a.out 0000000000423260 Unknown Unknown Unknown
a.out 0000000000404BD6 Unknown Unknown Unknown
a.out 0000000000402C14 sub_ 12 tracetest.f90
a.out 0000000000402B18 MAIN__ 2 tracetest.f90
a.out 0000000000402ADC Unknown Unknown Unknown
libc.so.6 000000323201EC5D Unknown Unknown Unknown
a.out 00000000004029D9 Unknown Unknown Unknown
=== DONE ===
Alternatively, if you want to keep the optimizations and also want to see the correct line (12), you can compile with (for example) -fast -traceback -debug all,inline_debug_info. Something similar may be available in other compilers, but I am not sure.

Get script name in OCaml?

Does OCaml have a way to get the current file/module/script name? Something like:
C/C++'s argv[0]
Python's sys.argv[0]
Perl/Ruby's $0
Erlang's ?FILE
C#'s System.Environment.CommandLine
Factor's scriptname/script
Go's os.Args[0]
Haskell's System.Environment.getProgName
Java's System.getProperty("sun.java.command").split(" ")[0]
Node.js's __filename
etc.
I don't know anything about OCaml but some googling turned up
Sys.argv.(0)
See http://caml.inria.fr/pub/docs/manual-ocaml/manual003.html#toc12
I presume you are scripting in OCaml. Then Sys.argv.(0) is the easiest way to get the script name. The Sys module also provides Sys.executable_name, but its semantics is slightly different:
let _ = prerr_endline Sys.executable_name; Array.iter prerr_endline Sys.argv;;
If I put the above line in test.ml and run it with ocaml test.ml hello world, I get:
/usr/local/bin/ocaml - executable_name
test.ml - argv.(0)
hello - argv.(1)
world - argv.(2)
So the OCaml toplevel does some fixing-up of argv for you.
In general, obtaining the current module name in OCaml is not easy, for several reasons:
ML modules are so flexible that they can be aliased, included into other modules, and applied to module functors.
OCaml does not embed the module name into its object file.
One possible workaround is to add a variable for the module name yourself, like:
let ml_source_name = "foobar.ml"
This definition could probably be auto-inserted by some pre-processing. However, I am not sure Camlp4 can obtain the file name of the source file currently being processed.
If your main purpose is simple scripting, then this pre-processing is of course too complicated, I am afraid.
let _ =
  let program = Sys.argv.(0) in
  print_endline ("Program: " ^ program)
And posted to RosettaCode.
In OCaml >= 4.02.0, you can also use __FILE__ to get the filename of the current file, which is similar to Node's __filename and not the same as Sys.argv.(0).
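For example, a minimal sketch:
(* file.ml *)
let () = print_endline __FILE__   (* prints the source file name as seen at compile time, e.g. "file.ml" *)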