How to check if there exists a sublist of a list in a Map? - kframework

I am stuck on the implementation of a function, and I am also not sure whether it is the correct way to solve my problem.
Description of my problem
For context, I want to be able to borrow (a unary operation) a field of a structure only if no references to this field or to its parents already exist. Let me clarify this with the following example; I hope the code makes things clearer.
struct p { x, p2: { x, p3: { x } } }
let a = ref p
let b = ref p.p2
let c = ref p.p2.p3
Here I have a structure p with nested fields and three references: one to p and two to its fields.
I use a Map to store the mapping between each reference and the path it refers to:
<env>
1 |-> 0 // 1 is a and 0 is p
2 |-> 0.1 // 2 is b and 0.1 is p.p2
3 |-> 0.1.2 // 3 is c and 0.1.2 is p.p2.p3
</env>
So now, if I want to apply the unary borrow operator to p.p2.p3.x:
borrow p.p2.p3.x;
This operation should fail because a, b, and c exist in my env.
My code
So, I tried to implement this in the following snippet:
module TEST-SYNTAX
  imports DOMAINS
  syntax Ref ::= "ref" "{" Path "}"
  syntax Path ::= List{Int,","}
  syntax Stmts ::= List{Stmt, ";"}
  syntax Stmt ::= Ref
                | Int "borrow" "{" Path "}"
endmodule

module TEST
  imports TEST-SYNTAX
  configuration <T>
                  <k> $PGM:Stmts </k>
                  <env> .Map </env>
                </T>
  rule S:Stmt ; Ss:Stmts => S ~> Ss
  rule .Stmts => .
  // Register a reference: bind a fresh Int to the referred path.
  rule <k> ref { P:Path } => . ... </k>
       <env> Rho:Map => Rho[!I:Int <- P] </env>
  syntax Bool ::= #checkborrow(Int, List, Path) [function]
  syntax List ::= #pathToSubPaths(Path, List) [function]
  rule <k> N:Int borrow { P:Path } => #checkborrow(N, #pathToSubPaths(P, .List), P) ... </k>
  rule #pathToSubPaths(.Path, S:List) => S
endmodule
I am stuck on how to implement the #checkborrow function. My idea is to first generate all the sub-paths of a given path, for example:
#pathToSubPaths(p.p2.p3.x) => { {p}, {p.p2}, {p.p2.p3}, {p.p2.p3.x} }
Then apply a projection function on env to see whether each sub-path already has a reference:
#refForSubPathsExist(SubPaths:Set) => {True, True, True, False}
and finally reduce the returned set with a folding OR:
#checkborrow({True, True, True, False}) => True
For now, I am stuck on the implementation of #pathToSubPaths.
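A rough, untested sketch of the direction I have in mind is the following (take it with a grain of salt: #appendToPath and #subPathsFrom are helper names I just made up, values and in are K's builtin MAP/LIST operations, and since a [function] cannot read the <env> cell I changed #checkborrow to take the Map as an extra argument):
  // Hypothetical helper: append one Int at the end of a Path.
  syntax Path ::= #appendToPath(Path, Int) [function]
  rule #appendToPath(.Path, I:Int) => I, .Path
  rule #appendToPath((J:Int, Js:Path), I:Int) => J, #appendToPath(Js, I)
  // Collect every prefix of a path, e.g. 0,1,2 yields the list (0) (0,1) (0,1,2).
  syntax List ::= #subPathsFrom(Path, Path) [function]
  rule #pathToSubPaths(P:Path, S:List) => S #subPathsFrom(P, .Path)
  rule #subPathsFrom(.Path, _) => .List
  rule #subPathsFrom((I:Int, Is:Path), Acc:Path)
    => ListItem(#appendToPath(Acc, I)) #subPathsFrom(Is, #appendToPath(Acc, I))
  // True when borrowing is allowed, i.e. no sub-path is already a value in env.
  syntax Bool ::= #checkborrow(Int, List, Path, Map) [function]
  rule #checkborrow(_, .List, _, _) => true
  rule #checkborrow(N:Int, ListItem(SP:Path) Rest:List, P:Path, Rho:Map)
    => notBool (SP in values(Rho)) andBool #checkborrow(N, Rest, P, Rho)
  // The borrow rule would then read <env> and hand the Map to the function.
  rule <k> N:Int borrow { P:Path }
        => #checkborrow(N, #pathToSubPaths(P, .List), P, Rho) ... </k>
       <env> Rho:Map </env>
Does this look like a reasonable way to structure it in K?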
Thank you if you had the courage to read the whole question :). I am unfamiliar with K, so I am looking for help.
NOTE:
We are using this version of K Framework: https://github.com/kframework/k/releases/tag/nightly-f5ea5c7

Related

Simple arithmetic parser created with Kotlin and better-parse fails

Please have a look at the following parser I implemented in Kotlin using the parser combinator library better-parse. It parses—should parse, rather—simple arithmetic expressions:
object CalcGrammar: Grammar<Int>() {
    val num by regexToken("""\d+""")
    val pls by literalToken("+")
    val min by literalToken("-")
    val mul by literalToken("*")
    val div by literalToken("/")
    val lpr by literalToken("(")
    val rpr by literalToken(")")
    val wsp by regexToken("\\s+", ignore = true)

    // expr ::= term + expr | term - expr | term
    // term ::= fact * term | fact / term | fact
    // fact ::= (expr) | -fact | int
    val fact: Parser<Int> by
        skip(lpr) and parser(::expr) and skip(rpr) or
        (skip(min) and parser(::fact) map { -it }) or
        (num map { it.text.toInt() })
    val term by
        leftAssociative(fact, mul) { a, _, b -> a * b } or
        leftAssociative(fact, div) { a, _, b -> a / b } or
        fact
    val expr by
        leftAssociative(term, pls) { a, _, b -> a + b } or
        leftAssociative(term, min) { a, _, b -> a - b } or
        term

    override val rootParser by expr
}
However, when I parse -2 + 4 - 5 + 6, I get this ParseException:
Could not parse input: UnparsedRemainder(startsWith=min#8 for "-" at 7 (1:8))
At first, I thought the issue was that I had not codified the self-recursion in the expr production, i.e., that the code does not accurately represent the grammar:
// Reference to `expr` is missing on the right-hand side...
val expr = ... leftAssociative(term, min) ...
// ...although the corresponding production defines it.
// expr ::= ... term - expr ...
But then I noticed that the official example for an arithmetic parser provided as part of the library's documentation—which turns out to be almost identical afaict—also omits this.
What did I do wrong if not this? And how can I make it work?
I am not entirely familiar with this library, but it seems your translation from grammar to code is too literal for the library's syntax: the library implicitly handles much of what you are explicitly writing, and as part of this "ease of use" it breaks what looks like correct code.
For starters, you will find that your code behaves exactly the same way whether or not you chain "or term" onto the end of expr and "or fact" onto the end of term.
Based on this, I concluded that these ors do not work as expected when chaining the leftAssociatives together; in fact, you would run into the same problem had you attempted to parse a division. I believe this is why the example you linked combines addition and subtraction (and likewise multiplication and division) into a single, more dynamic leftAssociative call. If you copy that same approach into your own code, it runs flawlessly:
object CalcGrammar: Grammar<Int>() {
    val num by regexToken("""\d+""")
    val pls by literalToken("+")
    val min by literalToken("-")
    val mul by literalToken("*")
    val div by literalToken("/")
    val lpr by literalToken("(")
    val rpr by literalToken(")")
    val wsp by regexToken("\\s+", ignore = true)

    // expr ::= term + expr | term - expr | term
    // term ::= fact * term | fact / term | fact
    // fact ::= (expr) | -fact | int
    val fact: Parser<Int> by
        skip(lpr) and parser(::expr) and skip(rpr) or
        (skip(min) and parser(::fact) map { -it }) or
        (num map { it.text.toInt() })
    private val term by
        leftAssociative(fact, div or mul use { type }) { a, op, b ->
            if (op == div) a / b else a * b
        }
    private val expr by
        leftAssociative(term, pls or min use { type }) { a, op, b ->
            if (op == pls) a + b else a - b
        }

    override val rootParser by expr
}
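As a quick check (this assumes better-parse's parseToEnd extension, imported from com.github.h0tk3y.betterParse.grammar), the input that originally failed now parses and evaluates left-associatively:
import com.github.h0tk3y.betterParse.grammar.parseToEnd

fun main() {
    // ((-2 + 4) - 5) + 6 == 3
    println(CalcGrammar.parseToEnd("-2 + 4 - 5 + 6"))  // prints 3
}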

Difficulties with solving the exercises in the K Framework Tutorial

Tutorial 1 Lesson 8 ( mu-defined )
My attempt is to substitute the existing rule: "rule mu X . E => E[(mu X . E) / X]"
with the rule:
rule mu X:KVar . E:Exp
  => let X =
       (lambda $x . ((lambda X . E) (lambda $y . ($x $x $y))))
       (lambda $x . ((lambda X . E) (lambda $y . ($x $x $y)))) [macro]
Unfortunately it won't compile. What am I doing wrong? Is this solution correct?
Tutorial 1 Lesson 8 ( SK-combinators )
They have this part in the README of the exercise:
" For example, lambda x . if x then y else z cannot be transformed into combinators as is,
but it can if we assume a builtin conditional function constant, say cond,
and desugar if_then_else_ to it. Then this expression becomes
lambda x . (((cond x) y) z), which we know how to transform. "
My struggle is with 'say cond, and desugar if_then_else_ to it.'
How do I do this part? Any help would be appreciated.
Tutorial 2 Lesson 4 (purely-syntactic)
They have this part in the README of the exercise:
'Hint: make sequential composition strict(1) or seqstrict, and have
statements reduce to {} instead of .; and don't forget to make
{} a KResult (you may need a new syntactic category for that, which
only includes {} and is included in KResult).'
My struggle is with the part: ' and have
statements reduce to {} instead of .; and don't forget to make
{} a KResult (you may need a new syntactic category for that, which
only includes {} and is included in KResult)'
How do I do this? Any hints?
Tutorial 2 Lesson 4 (uninitialized-variables)
I have created the following module:
module UNDEFINED
rule <k> int (X,Xs => Xs);_ </k> <state> Rho:Map (.Map => X|->"##") </state>
requires notBool (X in keys(Rho))
endmodule
But this initializes variables with "##". How do I initialize a variable with "undefined"? Any hints?
Tutorial 3 Lesson 1 (callCC)
For this exercise I created code very similar to the callcc one:
syntax Exp ::= "callCC" Exp [strict]
syntax Val ::= cc(K)
rule <k> (callCC V:Val => V cc(K)) ~> _ </k>
rule <k> cc(K) V ~> _ => V ~> _ </k>
For some reason it won't compile. What am I doing wrong? Is this solution correct?

How to declare an abstract function in Inox

I'm proving certain properties about elliptic curves, and for that I rely on some functions that deal with field operations. However, I don't want Inox to reason about the implementation of these functions but just to assume certain properties about them.
Say, for instance, I'm proving that the addition of points p1 = (x1,y1) and p2 = (x2,y2) is commutative. To implement the addition of points I need a function that implements addition over their components (i.e. the elements of a field).
The addition will have the following shape:
val addFunction = mkFunDef(addID)() { case Seq() =>
  val args: Seq[ValDef] = Seq("f1" :: F, "f2" :: F)
  val retType: Type = F
  val body: Seq[Variable] => Expr = { case Seq(f1, f2) =>
    // do the addition for this field
  }
  (args, retType, body)
}
For this function I can state properties such as:
val addAssociative: Expr = forall("x" :: F, "y" :: F, "z" :: F){ case (x, y, z) =>
(x ^+ (y ^+ z)) === ((x ^+ y) ^+ z)
}
where ^+ is just the infix operator corresponding to add as presented in this other question.
What is a proper expression to insert in the body so that Inox does not assume anything on it while unrolling?
There are two ways you can go about this:
Use a choose statement in the body of addFunction:
val body: Seq[Variable] => Expr = {
choose("r" :: F)(_ => E(true))
}
During unrolling, Inox will simply replace the choose with a fresh variable and assume the specified predicate (in this case true) on this variable.
Use a first-class function. Instead of using add as a named function, use a function-typed variable:
val add: Expr = Variable(FreshIdentifier("add"), (F, F) =>: F)
You can then specify your associativity property on add and prove the
relevant theorems.
In your case, it's probably better to go with the second option. The issue with proving things about an addFunction with a choose body is that you can't substitute add with some other function in the theorems you've shown about it. However, since the second option only shows things about a free variable, you can then instantiate your theorems with concrete function implementations.
Your theorem would then look something like:
val thm = forallI("add" :: ((F,F) =>: F)) { add =>
implI(isAssociative(add)) { isAssoc => someProperty }
}
and you can instantiate it through
val isAssocAdd: Theorem = ... /* prove associativity of concreteAdd */
val somePropertyForAdd = implE(
forallE(thm)(\("x" :: F, "y" :: F)((x,y) => E(concreteAdd)()(x, y))),
isAssocAdd
)

Why are the strings in my iterator being concatenated?

My original goal is to fetch a list of words, one on each line, and to put them in a HashSet, while discarding comment lines and raising I/O errors properly. Given the file "stopwords.txt":
a
# this is actually a comment
of
the
this
I managed to make the code compile like this:
fn stopword_set() -> io::Result<HashSet<String>> {
    let words = Result::from_iter(
        BufReader::new(File::open("stopwords.txt")?)
            .lines()
            .filter(|r| match r {
                &Ok(ref l) => !l.starts_with('#'),
                _ => true
            }));
    Ok(HashSet::from_iter(words))
}

fn main() {
    let set = stopword_set().unwrap();
    println!("{:?}", set);
    assert_eq!(set.len(), 4);
}
Here's a playground that also creates the file above.
I would expect to have a set of 4 strings at the end of the program. To my surprise, the function actually returns a set containing a single string with all words concatenated:
{"aofthethis"}
thread 'main' panicked at 'assertion failed: `(left == right)` (left: `1`, right: `4`)'
Led by a piece of advice in the docs for FromIterator, I got rid of all calls to from_iter and used collect instead (Playground), which did indeed solve the problem.
fn stopword_set() -> io::Result<HashSet<String>> {
    BufReader::new(File::open("stopwords.txt")?)
        .lines()
        .filter(|r| match r {
            &Ok(ref l) => !l.starts_with('#'),
            _ => true
        }).collect()
}
Why do the previous calls to from_iter lead to unexpected inferences, while collect() works just as intended?
A simpler reproduction:
use std::collections::HashSet;
use std::iter::FromIterator;

fn stopword_set() -> Result<HashSet<String>, u8> {
    let input: Vec<Result<_, u8>> = vec![Ok("foo".to_string()), Ok("bar".to_string())];
    let words = Result::from_iter(input.into_iter());
    Ok(HashSet::from_iter(words))
}

fn main() {
    let set = stopword_set().unwrap();
    println!("{:?}", set);
    assert_eq!(set.len(), 2);
}
The problem is that here, we are collecting from the iterator twice. The type of words is Result<_, u8>. However, Result also implements IntoIterator over its Ok value, so when we call from_iter on it at the end, the compiler sees that the Ok type must be String due to the method signature. Working backwards, you can construct a String from an iterator of Strings (String implements FromIterator<String>), so that's what the compiler picks: the first from_iter concatenates all the Strings into a single String.
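As a small standalone illustration of that last step (not part of the original code): collecting an iterator of Strings into a String simply concatenates them, which is exactly where the single "aofthethis" entry comes from.
fn main() {
    // String implements FromIterator<String>, so this collect concatenates.
    let words = vec!["a".to_string(), "of".to_string(), "the".to_string()];
    let concatenated: String = words.into_iter().collect();
    assert_eq!(concatenated, "aofthe");
}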
Removing the second from_iter would solve it:
fn stopword_set() -> Result<HashSet<String>, u8> {
    let input: Vec<Result<_, u8>> = vec![Ok("foo".to_string()), Ok("bar".to_string())];
    Result::from_iter(input.into_iter())
}
Or for your original:
fn stopword_set() -> io::Result<HashSet<String>> {
    Result::from_iter(
        BufReader::new(File::open("stopwords.txt")?)
            .lines()
            .filter(|r| match r {
                &Ok(ref l) => !l.starts_with('#'),
                _ => true
            }))
}
Of course, I'd normally recommend using collect instead, as I prefer the chaining:
fn stopword_set() -> io::Result<HashSet<String>> {
    BufReader::new(File::open("stopwords.txt")?)
        .lines()
        .filter(|r| match r {
            &Ok(ref l) => !l.starts_with('#'),
            _ => true,
        })
        .collect()
}

How to use Start States in ML-Lex?

I am creating a tokeniser in ML-Lex, part of the definition of which is:
datatype lexresult = STRING
| STRINGOP
| EOF
val error = fn x => TextIO.output(TextIO.stdOut,x ^ "\n")
val eof = fn () => EOF
%%
%structure myLang
digit=[0-9];
ws=[\ \t\n];
str=\"[.*]+\";
strop=\[[0-9...?\^]\];
%s alpha;
alpha=[a-zA-Z];
%%
<alpha> {alphanum}+ => (ID);
. => (error ("myLang: ignoring bad character " ^ yytext); lex());
I want the ID token to be detected only when it starts with or comes after "alpha". I know that writing it as
{alpha}+ {alphanum}* => (ID);
is an option, but I also need to learn how to use start states for some other purposes. Can someone please help me with this?
The information you need is in the ML-Lex documentation that comes with SML and is available in various places. Many university courses have online notes containing working examples.
The first thing to note from your example code is that you have overloaded the name alpha, using it to name both a state and a pattern. This is probably not a good idea. The pattern alphanum is not defined, and the result ID is not declared. These are basic errors which you should fix before thinking about using states - or posting a question here on SO. Asking for help with code containing such obvious faults does not encourage help from the experts. :-)
Having fixed up those errors, we can start using states. Here is my version of your code:
datatype lexresult = ID
| EOF
val error = fn x => TextIO.output(TextIO.stdOut,x ^ "\n")
val eof = fn () => EOF
%%
%structure myLang
digit=[0-9];
ws=[\ \t\n];
str=\"[.*]+\";
strop=\[[0-9...?\^]\];
%s ALPHA_STATE;
alpha=[a-zA-Z];
alphanum=[a-zA-Z0-9];
%%
<INITIAL>{alpha} => (YYBEGIN ALPHA_STATE; continue());
<ALPHA_STATE>{alphanum}+ => (YYBEGIN INITIAL; TextIO.output(TextIO.stdOut,"ID\n"); ID);
. => (error ("myLang: ignoring bad character " ^ yytext); lex());
You can see I've added ID to the lexresult, named the state ALPHA_STATE, and added the alphanum pattern. Now let's look at how the state code works:
There are two states in this program: INITIAL and ALPHA_STATE (all lex programs have a default INITIAL state). The lexer always begins recognising in the INITIAL state. The rule <INITIAL>{alpha} => indicates that if you encounter a letter while in the INITIAL state (i.e. NOT in ALPHA_STATE) then it is a match and the action should be invoked. The action for this rule works as follows:
YYBEGIN ALPHA_STATE; (* Switch from INITIAL state to ALPHA_STATE *)
continue() (* and keep going *)
Now that we are in ALPHA_STATE, the rules defined for this state are enabled, which includes the rule <ALPHA_STATE>{alphanum}+ =>. The action on this rule switches back to the INITIAL state and records the match.
For a longer example of using states (lex rather than ML-lex) you can see my answer here: Error while parsing comments in lex.
To test this ML-LEX program I referenced this helpful question: building a lexical analyser using ml-lex, and generated the following SML program:
use "states.lex.sml";
open myLang
val lexer =
let
fun input f =
case TextIO.inputLine f of
SOME s => s
| NONE => raise Fail "Implement proper error handling."
in
myLang.makeLexer (fn (n:int) => input TextIO.stdIn)
end
val nextToken = lexer();
and just for completeness, it generated the following output demonstrating the match:
c:\Users\Brian>"%SMLNJ_HOME%\bin\sml" main.sml
Standard ML of New Jersey v110.78 [built: Sun Dec 21 15:52:08 2014]
[opening main.sml]
[opening states.lex.sml]
[autoloading]
[library $SMLNJ-BASIS/basis.cm is stable]
[autoloading done]
structure myLang :
sig
structure UserDeclarations : <sig>
exception LexError
structure Internal : <sig>
val makeLexer : (int -> string) -> unit -> Internal.result
end
val it = () : unit
hello
ID