Declarations and global reference variables across several files

My folder contains several files, which are compiled in this order: global.ml, zone.ml, abs.ml, main.ml.
global.ml contains some reference variables (e.g. let g1 = ref 0) shared by all the files.
In zone.ml there is a declaration let f = !g1.
In abs.ml there is g1 := 5, which main runs at the beginning of execution; I consider it an initialization of g1 for the real run-time context.
Later, main calls Zone.f. Curiously, what I find is that it takes f = 0 instead of f = 5.
Is this behavior normal? If so, what should I change so that the current value of !g1 is taken into account?
PS: One solution might be to define a function let f v = v in zone.ml and have main call Zone.f !g1. But I have several global reference variables like g1 in global.ml; I'd like them to be visible across all files and functions, and I don't want them to appear in every function's signature.

You are basically concerned with the order of evaluation of the top-level values in your modules. The order in which this happens isn't related to the order that you compile the files, but rather the order that they appear when you link the files.
Ignoring the module boundaries, if you link the files in the order you give, what you have is effectively this:
let g1 = ref 0
let f = !g1
let () = g1 := 5
It shouldn't be surprising that f has the value 0.
Note that your main is not necessarily the first thing that happens at runtime. Top-level values are evaluated in the order the files appear when you link them. Very commonly, main is the last top-level thing to happen (because its file is usually the last one).
(Also note that having a main at all is just a convention, presumably adopted by former C programmers like me. There's no requirement to have a function named main. OCaml just evaluates the top-level values in order.)
Edit:
It's difficult to say how to restructure your code without knowing more about it. The essence of your problem appears to be that you define f as a top-level immutable value in zone.ml but you want its value to follow g1, which is a mutable value.
The simplest suggestion would be to remove the definition of f from zone.ml and replace it everywhere in the file with !g1.
If you want to retain the name f at the top level in zone.ml, you have to redefine it as something other than an immutable value. A function is the most obvious choice:
let f () = !g1
Then you would replace uses of f in zone.ml by f () instead.
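
For instance, here is a minimal sketch of the three files after this change (folding abs.ml's assignment into main.ml for brevity; the printing in main is just for illustration):

(* global.ml *)
let g1 = ref 0

(* zone.ml *)
let f () = !Global.g1  (* reads g1 at call time, not at link time *)

(* main.ml *)
let () =
  Global.g1 := 5;                   (* the initialization formerly in abs.ml *)
  Printf.printf "%d\n" (Zone.f ())  (* prints 5 *)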


menhir - associate AST nodes with token locations in source file

I am using Menhir to parse a DSL. My parser builds an AST using an elaborate collection of nested types. During later typechecking and other passes, in error reports generated for the user, I would like to refer to the source file position where the problem occurred. These are not parsing errors; they are generated after parsing is completed.
A naive solution would be to equip all AST types with additional location information, but that would make working with them (e.g. constructing or matching) unnecessarily clumsy. What are the established practices for doing this?
I don't know if it's a best practice, but I like the approach taken in the abstract syntax tree of the Frama-C system; see https://github.com/Frama-C/Frama-C-snapshot/blob/master/src/kernel_services/ast_data/cil_types.mli
This approach uses "layers" of records and algebraic types nested in each other. The records hold meta-information like source locations, as well as the algebraic "node" you can match on.
For example, here is a part of the representation of expressions:
type ...
and exp = {
  eid: int;        (** unique identifier *)
  enode: exp_node; (** the expression itself *)
  eloc: location;  (** location of the expression *)
}
and exp_node =
  | Const of constant (** Constant *)
  | Lval of lval      (** Lvalue *)
  | UnOp of unop * exp * typ
  | BinOp of binop * exp * exp * typ
  ...
So given a variable e of type exp, you can access its source location with e.eloc, and pattern match on its abstract syntax tree in e.enode.
Simple, "top-level" matches on the syntax are thus very easy:
let rec is_const_expr e =
  match e.enode with
  | Const _ -> true
  | Lval _ -> false
  | UnOp (_op, e', _typ) -> is_const_expr e'
  | BinOp (_op, l, r, _typ) -> is_const_expr l && is_const_expr r
To match deeper in an expression, you have to go through a record at each level. This adds some syntactic clutter, but not too much, as you can pattern match on only the one record field that interests you:
let optimize_double_negation e =
  match e.enode with
  | UnOp (Neg, { enode = UnOp (Neg, e', _) }, _) -> e'
  | _ -> e
For comparison, on a pure AST without metadata, this would be something like:
let optimize_double_negation e =
  match e with
  | UnOp (Neg, UnOp (Neg, e', _), _) -> e'
  | _ -> e
I find that Frama-C's approach works well in practice.
You need to attach the location information to your nodes somehow. The usual solution is to encode your AST node as a record, e.g.,
type node =
  | Typedef of typdef
  | Typeexp of typeexp
  | Literal of string
  | Constant of int
  | ...

type annotated_node = { node : node; loc : loc }
Since you're using records, you can still pattern match without too much syntactic overhead, e.g.,
match node with
| {node=Typedef t} -> pp_typedef t
| ...
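
As a self-contained sketch of this pattern (the loc type, the report_error function, and the reduced constructor set here are illustrative placeholders, not part of the answer above):

(* A hypothetical location type; with Menhir you would typically build
   it from the Lexing.position values exposed in semantic actions. *)
type loc = { file : string; line : int; col : int }

type node =
  | Literal of string
  | Constant of int

type annotated_node = { node : node; loc : loc }

(* Match on the [node] field, and use [loc] only for reporting. *)
let report_error { node; loc } =
  let what =
    match node with
    | Literal s -> Printf.sprintf "literal %S" s
    | Constant n -> Printf.sprintf "constant %d" n
  in
  Printf.eprintf "%s:%d:%d: error near %s\n" loc.file loc.line loc.col what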
Depending on your representation, you may choose between wrapping each branch of your type individually, wrapping the whole type, or doing it recursively, as in the Frama-C example by Isabelle Newbie.
A similar but more general approach is to extend a node not with the location but just with a unique identifier, and to use a finite map to attach arbitrary data to nodes. The benefit of this approach is that you can extend your nodes with arbitrary data, since you effectively externalize node attributes. The drawback is that you can't guarantee the totality of an attribute, since finite maps are not total. It is thus harder to preserve an invariant that, for example, all nodes have a location.
Since every heap-allocated object already has an implicit unique identifier, its address, it is possible to attach data to heap-allocated objects without wrapping them in another type. For example, we can keep type node as it is and use finite maps to attach arbitrary pieces of information to nodes, as long as each node is a heap object, i.e., the node definition doesn't contain constant constructors. (If it does, you can work around this by adding a bogus unit value; e.g., | End can be represented as | End of unit.)
Of course, by an address I do not literally mean the physical or virtual address of an object. OCaml uses a moving GC, so the actual address of an OCaml object may change during program execution. Moreover, an address is in general not unique: once an object is deallocated, its address can be reused by a completely different entity.
Fortunately, now that ephemerons have been added to recent versions of OCaml, this is no longer a problem. Moreover, an ephemeron plays nicely with the GC, so that if a node is no longer reachable, its attributes (like file locations) will be collected as well. So, let's ground this in a concrete example. Suppose we have two nodes c1 and c2:
let c1 = Literal "hello"
let c2 = Constant 42
Now we can create a location mapping from nodes to locations (we will represent the latter as just strings):
module Locations = Ephemeron.K1.Make(struct
  type t = node
  let hash = Hashtbl.hash (* or your own hash if you have one *)
  let equal = (=)         (* or a specialized equality operator *)
end)
The Locations module provides the interface of a typical imperative hash table, so let's use it. In the parser, whenever you create a new node, you should register its location in the global locations table, e.g.,
let locations = Locations.create 1337

(* somewhere in the semantic actions, where c1 and c2 are created *)
Locations.add locations c1 "hello.ml:12:32"
Locations.add locations c2 "hello.ml:13:56"
And later, you can extract the location:
# Locations.find locations c1;;
- : string = "hello.ml:12:32"
As you see, although this solution is nice in the sense that it doesn't touch the node data type (so the rest of your code can pattern match on it nice and easy), it is still a little bit dirty, as it requires global mutable state that is hard to maintain. Also, since we are using an object's address as a key, every newly created object, even one logically derived from the original, will have a different identity. For example, suppose you have a function that normalizes all literals:
let normalize = function
  | Literal str -> Literal (normalize_literal str)
  | node -> node
It will create a new Literal node distinct from the original node, so all the location information will be lost. That means that you need to update the location information every time you derive one node from another.
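
For instance, a sketch reusing the hypothetical Locations table from above:

let normalize node =
  match node with
  | Literal str ->
    let node' = Literal (normalize_literal str) in
    (* carry the source location over to the derived node *)
    (match Locations.find locations node with
     | loc -> Locations.add locations node' loc
     | exception Not_found -> ());
    node'
  | other -> other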
Another issue with ephemerons is that they can't survive marshaling or serialization. I.e., if you store your AST in a file and then restore it, all nodes will lose their identity, and the location table will become empty.
Speaking of the "monadic approach" mentioned in the comments: though monads are magic, they still can't magically solve all problems; they are not silver bullets :) In order to attach something to a node, we still need to extend it with an extra attribute, either the location information directly or an identity through which we can attach properties indirectly. A monad can be useful for the latter, though: instead of keeping a global reference to the last assigned identifier, we can use a state monad to encapsulate our id generator.

And for the sake of completeness: instead of using a state monad or a global reference to generate unique identifiers, you can use UUIDs and get identifiers that are not only unique within a program run, but universally unique, in the sense that no other object in the world has the same identifier, no matter how often you run your program (in a sane world). And although generating a UUID looks stateless, under the hood it still uses an imperative random number generator, so it is sort of cheating; but it can still be seen as purely functional, as it exhibits no observable effects.
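
To make the state-monad idea concrete, here is a minimal hand-rolled sketch (illustrative only; names and types are not from the original answer, and a real implementation would likely use a monad library):

(* A computation that needs fresh ids is a function from the next
   free id to a result paired with the updated next id. *)
type 'a gen = int -> 'a * int

type 'a with_id = { id : int; value : 'a }

let return (x : 'a) : 'a gen = fun s -> (x, s)

let ( >>= ) (m : 'a gen) (f : 'a -> 'b gen) : 'b gen =
  fun s -> let x, s' = m s in f x s'

(* Draw a fresh identifier from the threaded counter. *)
let fresh : int gen = fun s -> (s, s + 1)

(* Attach a fresh id to a node-like value. *)
let annotate (v : 'a) : 'a with_id gen =
  fresh >>= fun id -> return { id; value = v }

(* Run a generator starting from id 0. *)
let run (m : 'a gen) : 'a = fst (m 0)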

Live variable analysis: is my explanation correct?

As Stack Exchange doesn't have more specific compiler tags, I'm posting this question here.
A variable x is said to be live at a statement Si in a program if the following three conditions hold simultaneously:

1. There exists a statement Sj that uses x.
2. There is a path from Si to Sj in the flow graph corresponding to the program.
3. The path has no intervening assignment to x, including at Si and Sj.

The variables which are live both at the statement in basic block 2 and at the statement in basic block 3 of the above control flow graph are:

1. p, s, u
2. r, s, u
3. r, u
4. q, v
Let me try to explain:
As Wikipedia says, "Stated simply: a variable is live if it holds a value that may be needed in the future."
As per the definition given in question, a variable is live if it is used in future before any new assignment.
Block 2 has 'r' and 'v' both as live variables, as they are used in block 4 before any new value is assigned to them. Note that variable 'u' is not live in block 2, as 'u' is assigned a new value in block 1 before it is used in block 3. Variables 'p', 's' and 'q' are not live in block 2 for the same reason.
Block 3 has only 'r' as a live variable, as every other variable is assigned a new value before it is used.
Another explanation goes:
Only r.
p, s and u are assigned to in block 1, and there is no intervening use of them before that. Hence p, s and u are not live in both 2 and 3.
q is assigned to in block 4 and hence is not live in both 2 and 3.
v is live at 3 but not at 2.
Only r is live at both 2 and 3.
But the official GATE answer key says both r and u.
I see two things that are probably at least part of your confusion.
First, in the list of conditions for live variables, there is no restriction stating that Si != Sj, which makes the definitions a little imprecise in my mind.
The second is your assertion "Note that variable ‘u’ is not live in block 2 as ‘u’ is assigned a new value in block 1 before it is used in block 3."
The way I would look at this would be this:
Upon entry into block 2, before the statement v = r + u is
executed (imagine a no-op empty statement inserted as the entry to
each block, and another as the exit from the block), then both r
and u must be live, because there exists an upcoming statement
that uses both, and there is a path in the code from the entry to
that statement that contains no intermediate assignments to either.
Rather than imagining these extra empty statements, using the
original semantics of the definitions, then we're just talking about
the case where Si == Sj, because both refer to the v = r + u
statement - there is trivially a path from a statement to itself
containing no additional assignments in that sense. I find it easier
to think about it with the extra empty entry and exit statements,
though.
After the execution of v = r + u, however, at the imaginary
block exit empty statement, it is now safe to consider u as not
live (alternatively, dead), because nothing in block 4 uses it, and
it's reassigned in block 1 before it's ever used again in either
block 2 or 3.
So, part of the confusion seems to be thinking of whether a variable is live in a particular block, which doesn't really fit with the definitions - you need to think about whether a variable is live at the point of a particular statement. You could make the case that the variable u is both alive and dead in block 2, depending on whether you look at it before execution of the lone statement, or after...
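
For concreteness, here is a small OCaml sketch (not part of the original answers) of the standard backward dataflow formulation of this analysis, where live-in(b) = use(b) ∪ (live-out(b) \ def(b)); the block contents and successor lists are hypothetical inputs:

(* Minimal iterative liveness analysis over a control flow graph. *)
module S = Set.Make (String)

type block = { use : S.t; def : S.t; succ : int list }

(* Compute live-in sets with the standard backward equations:
   live_out(b) = union of live_in over successors of b
   live_in(b)  = use(b) + (live_out(b) - def(b))            *)
let liveness (blocks : block array) =
  let n = Array.length blocks in
  let live_in = Array.make n S.empty in
  let changed = ref true in
  while !changed do
    changed := false;
    for b = n - 1 downto 0 do
      let live_out =
        List.fold_left
          (fun acc s -> S.union acc live_in.(s))
          S.empty blocks.(b).succ
      in
      let live_in_b = S.union blocks.(b).use (S.diff live_out blocks.(b).def) in
      if not (S.equal live_in_b live_in.(b)) then begin
        live_in.(b) <- live_in_b;
        changed := true
      end
    done
  done;
  live_in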

Clearing numerical values in Mathematica

I am working on fairly large Mathematica projects and the problem arises that I have to intermittently check numerical results but want to easily revert to having all my constructs in analytical form.
The code is fairly fluid, and I don't want to use scoping constructs everywhere, as they add overhead to my workflow. Is there an easy way to identify and clear all assignments that are numerical?
EDIT: I really do know that scoping is the way to do this correctly ;-). However, for my workflow I am really just looking for a dirty trick to nix all numerical assignments after the fact instead of having the foresight to put down a Block.
If your assignments are on the top level, you can use something like this:
a = 1;
b = c;
d = 3;
e = d + b;
Cases[DownValues[In],
  HoldPattern[lhs_ = rhs_?NumericQ] |
    HoldPattern[(lhs_ = rhs_?NumericQ;)] :> Unset[lhs],
  3]
This will work if you have a sufficient history length ($HistoryLength defaults to infinity). Note however that, in the above example, e was assigned 3 + c, and that 3 was not undone. So the problem is really ambiguous in formulation, because some numbers can make it into definitions. One way to avoid this is to use SetDelayed for assignments rather than Set.
Another alternative would be to analyze the names in, say, the Global` context (if that is the context where your symbols live), and then the OwnValues and DownValues of those symbols, in a fashion similar to the above, and remove definitions with a purely numerical r.h.s.
But IMO neither of these approaches is robust. I'd still use scoping constructs and try to isolate the numerics. One possibility is to wrap your final code in a Block and assign numerical values inside this Block. This seems a much cleaner approach. The work overhead is minimal: you just have to remember which symbols you want to assign the values to. Block will automatically ensure that outside it the symbols have no definitions.
EDIT
Yet another possibility is to use local rules. For example, one could define rule[a] = a -> 1; rule[d] = d -> 3 instead of the assignments above. You could then apply these rules, extracting them as, say, DownValues[rule][[All, 2]], whenever you want to test with some numerical arguments.
Building on Andrew Moylan's solution, one can construct a Block-like function that takes rules:
SetAttributes[BlockRules, HoldRest]
BlockRules[rules_, expr_] :=
  Block @@ Append[Apply[Set, Hold @ rules, {2}], Unevaluated[expr]]
You can then save your numeric rules in a variable, and use BlockRules[ savedrules, code ], or even define a function that would apply a fixed set of rules, kind of like so:
In[76]:= NumericCheck =
   Function[body, BlockRules[{a -> 3, b -> 2`}, body], HoldAll];

In[78]:= a + b // NumericCheck

Out[78]= 5.
EDIT In response to Timo's comment, it might be possible to use NotebookEvaluate (new in 8) to achieve the requested effect.
SetAttributes[BlockRules, HoldRest]
BlockRules[rules_, expr_] :=
  Block @@ Append[Apply[Set, Hold @ rules, {2}], Unevaluated[expr]]

nb = CreateDocument[{ExpressionCell[
     Defer[Plot[Sin[a x], {x, 0, 2 Pi}]], "Input"],
    ExpressionCell[Defer[Integrate[Sin[a x^2], {x, 0, 2 Pi}]],
     "Input"]}];

BlockRules[{a -> 4}, NotebookEvaluate[nb, InsertResults -> True];]
As the result of this evaluation, you get a notebook with your commands evaluated while a was locally set to 4. To take this further, you would take the notebook with your code, open a new notebook, evaluate Notebooks[] to identify the notebook of interest, and then do:

BlockRules[variablerules,
  NotebookEvaluate[NotebookPut[NotebookGet[nbobj]],
    InsertResults -> True]]
I hope you can make this idea work.

Elegantly alter a list of variables: Generalization of AddTo, TimesBy, etc

Suppose I've defined a list of variables
{a,b,c} = {1,2,3}
If I want to double them all I can do this:
{a,b,c} *= 2
The variables {a,b,c} now evaluate to {2,4,6}.
If I want to apply an arbitrary transformation function to them, I can do this:
{a,b,c} = f /@ {a,b,c}
How would you do that without specifying the list of variables twice?
(Set aside the objection that I'd probably want an array rather than a list of distinctly named variables.)
You can do this:
Function[Null, # = f /@ #, HoldAll][{a, b, c}]
For example,
In[1]:=
{a,b,c}={1,2,3};
Function[Null, # = f /@ #, HoldAll][{a,b,c}];
{a,b,c}
Out[3]= {f[1],f[2],f[3]}
Or, you can do the same without hard-coding f, by defining a custom set function. The effect of a foreach loop can be reproduced easily if you give it the Listable attribute:
ClearAll[set];
SetAttributes[set, {HoldFirst, Listable}]
set[var_, f_] := var = f[var];
Example:
In[10]:= {a,b,c}={1,2,3};
set[{a,b,c},f1];
{a,b,c}
Out[12]= {f1[1],f1[2],f1[3]}
You may also want to get speed benefits for cases when your f is Listable, which is especially relevant now since M8's Compile enables user-defined functions to benefit from being Listable in terms of speed, in a way that previously only built-in functions could. All you have to do with set in such cases (when you are after speed and you know that f is Listable) is to remove the Listable attribute of set.
I hit upon an answer to this when fixing up this old question: ForEach loop in Mathematica
Defining the each function as in the accepted answer to that question, we can answer this question with:
each[i_, {a,b,c}, i = f[i]]

Optional named arguments without wrapping them all in "OptionValue"

Suppose I have a function with optional named arguments but I insist on referring to the arguments by their unadorned names.
Consider this function that adds its two named arguments, a and b:
Options[f] = {a->0, b->0}; (* The default values. *)
f[OptionsPattern[]] :=
OptionValue[a] + OptionValue[b]
How can I write a version of that function where that last line is replaced with simply a+b?
(Imagine that a+b is a whole slew of code.)
The answers to the following question show how to abbreviate OptionValue (easier said than done) but not how to get rid of it altogether: Optional named arguments in Mathematica
Philosophical Addendum: It seems like if Mathematica is going to have this magic with OptionsPattern and OptionValue, it might as well go all the way and have a language construct for doing named arguments properly, where you can just refer to them by, you know, their names, like every other language with named arguments does. (And in the meantime, I'm curious what workarounds are possible...)
Why not just use something like:
Options[f] = {a->0, b->0};
f[args___] := (a+b) /. Flatten[{args, Options[f]}]
For more complicated code I'd probably use something like:
Options[f] = {a->0, b->0};
f[OptionsPattern[]] := Block[{a,b}, {a,b} = OptionValue[{a,b}]; a+b]
and use a single call to OptionValue to get all the values at once. (Main reason is that this cuts down on messages if there are unknown options present.)
Update, to programmatically generate the variables from the options list:
Options[f] = {a -> 0, b -> 0};
f[OptionsPattern[]] :=
  With[{names = Options[f][[All, 1]]},
    Block[names, names = OptionValue[names]; a + b]]
Here is the final version of my answer, containing the contributions from the answer by Brett Champion.
ClearAll[def];
SetAttributes[def, HoldAll];
def[lhs : f_[args___] :> rhs_] /; ! FreeQ[Unevaluated[lhs], OptionsPattern] :=
  With[{optionNames = Options[f][[All, 1]]},
    lhs := Block[optionNames, optionNames = OptionValue[optionNames]; rhs]];
def[lhs : f_[args___] :> rhs_] := lhs := rhs;
The reason why the definition is given as a delayed rule in the argument is that this way we can benefit from the syntax highlighting. The Block trick is used because it fits the problem: it does not interfere with possible nested lexical scoping constructs inside your function, so there is no danger of inadvertent variable capture. We check for the presence of OptionsPattern since this code will not be correct for definitions without it, and we want def to also work in that case.
Example of use:
Clear[f, a, b, c, d];
Options[f] = {a -> c, b -> d}; (* The default values. *)

def[f[n_, OptionsPattern[]] :> (a + b)^n]
def[f[n_, m_] :> m + n]
You can now look at the definitions:
Global`f

f[n$_, OptionsPattern[]] := Block[{a, b}, {a, b} = OptionValue[{a, b}]; (a + b)^n$]

f[n_, m_] := m + n

Options[f] = {a -> c, b -> d}
We can test it now:
In[10]:= f[2]
Out[10]= (c+d)^2

In[11]:= f[2, a -> e, b -> q]
Out[11]= (e+q)^2
The modifications are done at "compile time" and are pretty transparent. While this solution saves some typing w.r.t. Brett's, it determines the set of option names at "compile time", while Brett's does so at "run time". Therefore it is a bit more fragile than Brett's: if you add some new option to the function after it has been defined with def, you must Clear it and rerun def. In practice, however, it is customary to start with ClearAll and put all definitions in one piece (cell), so this does not seem to be a real problem. Also, it cannot work with string option names, but your original concept also assumes they are Symbols. Also, they should not have global values, at least not at the time when def executes.
Here's a kind of horrific solution:
Options[f] = {a->0, b->0};
f[OptionsPattern[]] := Module[{vars, tmp, ret},
  vars = Options[f][[All, 1]];
  tmp = cat[vars];
  each[{var_, val_}, Transpose[{vars, OptionValue[Automatic, #] & /@ vars}],
    var = val];
  ret =
    a + b; (* finally! *)
  eval["ClearAll[", StringTake[tmp, {2, -2}], "]"];
  ret]
It uses the following convenience functions:
cat = StringJoin @@ (ToString /@ {##}) &;  (* Like sprintf/strout in C/C++. *)
eval = ToExpression[cat[##]] &;            (* Like eval in every other lang. *)

SetAttributes[each, HoldAll];              (* each[pattern, list, body] *)
each[pat_, lst_, bod_] := ReleaseHold[     (* converts pattern to body for *)
  Hold[Cases[Evaluate @ lst, pat :> bod];]]; (* each element of list. *)
Note that this doesn't work if a or b has a global value when the function is called. But that was always the case for named arguments in Mathematica anyway.