Purpose of anonymous modules in Agda

Going to the root of the Agda standard library and issuing the following command:
grep -r "module _" . | wc -l
Yields the following result:
843
Whenever I encounter such anonymous modules (I assume that's what they are called), I can't quite figure out what their purpose is, despite their apparent ubiquity, nor how to use them: by definition, I can't access their content using their name, although I assume this should be possible, otherwise there would be no point in even allowing them to be defined.
The wiki page:
https://agda.readthedocs.io/en/v2.6.1/language/module-system.html#anonymous-modules
has a section called "anonymous modules" which is in fact empty.
Could somebody explain what the purpose of anonymous modules is?
If possible, any example to emphasize the relevance of the definition of such modules, as well as how to use their content would be very much appreciated.
Here are the possible ideas I've come up with, but none of them seems completely satisfying:
They are a way to group thematically related definitions inside an Agda file.
Their name is somehow inferred by Agda when using the functions they provide.
Their content is only meant to be visible / used inside their enclosing module (a bit like a private block).

Anonymous modules can be used to simplify a group of definitions which share some arguments. Example:
open import Data.Empty
open import Data.Nat
<⇒¬≥ : ∀ {n m} → n < m → n ≥ m → ⊥
<⇒¬≥ = {!!}
<⇒> : ∀ {n m} → n < m → m > n
<⇒> = {!!}
module _ {n m} (p : n < m) where
  <⇒¬≥′ : n ≥ m → ⊥
  <⇒¬≥′ = {!!}

  <⇒>′ : m > n
  <⇒>′ = {!!}
As far as I know, this is the only use of anonymous modules. When the module _ scope is closed, you can no longer refer to the module itself, but you can refer to its definitions as if they hadn't been defined in a module at all (but with the module parameters as extra arguments instead).
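For illustration, here is a minimal sketch of how one of these definitions could be used after the module is closed (the name example is hypothetical):
-- Outside the module, the parameters {n m} and (p : n < m) are
-- prepended to each definition's type, so
-- <⇒>′ : ∀ {n m} → n < m → m > n
example : ∀ {n m} → n < m → m > n
example p = <⇒>′ p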

Proving associativity of sequential composition in Isabelle

Consider the following inductive definition describing the small step semantics of the language of guarded commands:
inductive small_step :: "com × state ⇒ com × state ⇒ bool" (infix "→" 55)
where
Assign: "(x ::= a, s) → (SKIP, s(x := aval a s))" |
Seq1: "(SKIP;;c2,s) → (c2,s)" |
Seq2: "(c1,s) → (c1',s') ⟹ (c1;;c2,s) → (c1';;c2,s')" |
IfBlock: "(b,c) ∈ set gcs ⟹ bval b s ⟹ (IF gcs FI,s) → (c,s)" |
DoTrue: "(b,c) ∈ set gcs ⟹ bval b s ⟹ (DO gcs OD,s) → (c;;DO gcs OD,s)" |
DoFalse: "(∀ b c. (b,c) ∈ set gcs ⟶ ¬ bval b s) ⟹ (DO gcs OD,s) → (SKIP,s)"
I want to prove:
lemma "((c1 ;; c2) ;; c3) ~ (c1 ;; (c2 ;; c3))"
where:
definition equiv_c :: "com ⇒ com ⇒ bool" (infix "~" 50) where
"c ~ c' ≡ ∀ s c0 s0. (c,s) → (c0,s0) = (c',s) → (c0,s0)"
And it is getting surprisingly difficult to do so. The closest solution I have found is given in "An operational semantics for the Guarded Command Language" by Johan J. Lukkien. However, in that article programs are modelled as sequences of states, whereas here I am modelling them as command configurations plus states. Perhaps there is a relation between the two, but my attempts so far have been unsuccessful.
Do you see a way to prove this lemma in Isabelle?
The desired lemma as stated does not hold, so you will not be able to prove it. I can see two main problems:
The → denotes one step of the execution, but equivalence should talk about the whole behaviour, not just one step. After one step, it is likely that the resulting programs are still different.
The semantics can get stuck for some programs, e.g., IF [] FI. Such stuck states make it hard to state equivalence if you want to say something about the remaining program c0. For example, take c1 = SKIP;; IF [] FI in your lemma and c2 = c3 = SKIP. Then no partially evaluated command reachable from (c1 ;; c2) ;; c3 is identical to one reachable from c1 ;; (c2 ;; c3).
I recommend that you first figure out what the behaviour of a program is supposed to be. For a while language, this is typically the set of reachable final states and possibly non-termination. Then, you have to decide what kind of equivalence you are interested in. Typically, one looks at trace equivalence or at bisimulation, which are not the same for non-deterministic programs. And the equivalence notion will determine how to prove such a lemma.
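For illustration, here is a minimal sketch of one such notion, equivalence of reachable final states. It assumes →* denotes the reflexive transitive closure of →, e.g. obtained via the star operator from the HOL-IMP theories, and the name big_equiv is hypothetical:
(* Sketch only: →* is assumed to be the reflexive transitive closure
   of the small-step relation → defined above. *)
definition big_equiv :: "com ⇒ com ⇒ bool" (infix "≈" 50) where
  "c ≈ c' ≡ ∀s t. ((c, s) →* (SKIP, t)) ⟷ ((c', s) →* (SKIP, t))"
Note that this is a trace-insensitive notion; for non-deterministic programs a bisimulation may be the more appropriate choice, as discussed above.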

Redefining operator precedence in a module for an exported predicate

I would like to write a module that exports a predicate p/1 which the user should be able to use as a prefix operator. I have defined the following module:
:- module(lala, [p/1]).
:- op(500, fy, [p]).
p(comment).
p(ca).
p(va).
and load it now via:
?- use_module(lala).
true.
Unfortunately, a query fails:
?- p X.
ERROR: Syntax error: Operator expected
ERROR: p
ERROR: ** here **
ERROR: X .
After setting the operator precedence properly, everything works:
?- op(500, fy, [p]).
true.
?- p X.
X = comment ;
X = ca ;
X = va.
I used SWI-Prolog for my output, but the same problem occurs in YAP as well (GNU Prolog does not support modules). Is there a way to spare the user from having to set the precedence themselves?
You can export the operator with the module/2 directive.
For example:
:- module(lala, [p/1,
                 op(500, fy, p)]).
Since the operator is then also available in the module, you can write for example:
p comment.
p ça.
p va.
where p is used as a prefix operator.
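With the operator exported this way, the original query should succeed right after loading, without a manual op/3 call. A sketch of the expected session, given the facts above:
?- use_module(lala).
true.

?- p X.
X = comment ;
X = ça ;
X = va.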

Constraining a function type with an Idris interface

I'd like to create a function whose type is constrained by an interface. My intention is to build a simple monoid solver using VerifiedMonoid, defined in the Classes.Verified module from the contrib package.
Idris gives me the following error:
Monoid-prover.idr line 29 col 5:
When checking type of MonoidProver.eval:
Can't find implementation for VerifiedMonoid a
for the type signature of eval:
eval : VerifiedMonoid a => Expr n -> Env a n -> a
Am I doing something silly or missing something? Can such constrained types (like eval's type) be interpreted like Haskell's?
The complete code is shown below.
module MonoidProver
import Classes.Verified
import Data.Fin
import Data.Vect
%default total
infixl 5 :+:
data Expr : Nat -> Type where
  Var   : Fin n -> Expr n
  Id    : Expr n
  (:+:) : Expr n -> Expr n -> Expr n
Env : VerifiedMonoid a => (a : Type) -> Nat -> Type
Env a n = Vect n a
eval : VerifiedMonoid a => Expr n -> Env a n -> a
eval (Var i) env = index i env
eval Id env = neutral
eval (e :+: e') env = eval e env <+> eval e' env
As @Cactus mentioned in a comment, the problem here is that the type signature for Env is not correct. Specifically, since the constraint is on the value of the argument, it is a dependent (pi) type, and Idris enforces that dependent types go left-to-right (be introduced before they are used). My understanding is that basically the compiler will see the argument x : a as being shorthand for the dependent function type (x : a) -> T, where T is the part of the type signature to the right of the quantifier. This means that (a : Type) actually binds a fresh a, which is not the same a that was used in the VerifiedMonoid a constraint (which is implicitly universally bound).
Luckily, since Idris isn't Haskell (where all type class constraints must be at the head of the type signature), this is easy to fix: just change your type signature to Env : (a : Type) -> VerifiedMonoid a => Nat -> Type, so that a is bound before it is used!
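Concretely, only Env's signature changes; its body and eval stay exactly as in the question:
-- a is now bound before the constraint mentions it, so both refer to
-- the same a.
Env : (a : Type) -> VerifiedMonoid a => Nat -> Type
Env a n = Vect n a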

How to enable hints and warnings in the online REPL

I figured out that I can do it in the command-line REPL like so:
java -jar frege-repl-1.0.3-SNAPSHOT.jar -hints -warnings
But how can I do the same at http://try.frege-lang.org?
Hints and warnings are already enabled by default. For example,
frege> f x = f x
function f :: α -> β
3: application of f will diverge.
Perhaps we can make it better by explicitly labelling each message as a warning or a hint (instead of distinguishing them by colors), something like:
[Warning] 3: application of f will diverge.
and providing an option to turn them on/off.
Update:
There was indeed an issue (Thanks Ingo for pointing that out!) with showing warnings that are generated in a later phase during the compilation. This issue has been fixed and the following examples now correctly display warnings in the REPL:
frege> h x = 0; h false = 42
function h :: Bool -> Int
4: equation or case alternative cannot be reached.
frege> f false = 6
function f :: Bool -> Int
5: function pattern is refutable, consider
adding a case for true

How to prevent common sub-expression elimination (CSE) with GHC

Given the program:
import Debug.Trace
main = print $ trace "hit" 1 + trace "hit" 1
If I compile with ghc -O (7.0.1 or higher) I get the output:
hit
2
i.e. GHC has used common sub-expression elimination (CSE) to rewrite my program as:
main = print $ let x = trace "hit" 1 in x + x
If I compile with -fno-cse then I see hit appearing twice.
Is it possible to avoid CSE by modifying the program? Is there any sub-expression e for which I can guarantee e + e will not be CSE'd? I know about lazy, but can't find anything designed to inhibit CSE.
The background of this question is the cmdargs library, where CSE breaks the library (due to impurity in the library). One solution is to ask users of the library to specify -fno-cse, but I'd prefer to modify the library.
How about removing the source of the trouble -- the implicit effect -- by using a sequencing monad that introduces that effect? E.g. the strict identity monad with tracing:
import Debug.Trace (trace)

data Eval a = Done a
            | Trace String a

instance Monad Eval where
    return x        = Done x
    Done x    >>= k = k x
    Trace s a >>= k = trace s (k a)

runEval :: Eval a -> a
runEval (Done x) = x

track = Trace
now we can write stuff with a guaranteed ordering of the trace calls:
main = print $ runEval $ do
    t1 <- track "hit" 1
    t2 <- track "hit" 1
    return (t1 + t2)
while still being pure code, and GHC won't try to get too clever, even with -O2:
$ ./A
hit
hit
2
So we introduce just the computation effect (tracing) sufficient to teach GHC the semantics we want.
This is extremely robust to compiler optimizations. So much so that GHC optimizes the math to 2 at compile time, yet still retains the ordering of the trace statements.
As evidence of how robust this approach is, here's the core with -O2 and aggressive inlining:
main2 =
  case Debug.Trace.trace string trace2 of
    Done x    -> case x of
                   I# i# -> $wshowSignedInt 0 i# []
    Trace _ _ -> err

trace2 = Debug.Trace.trace string d

d :: Eval Int
d = Done n

n :: Int
n = I# 2

string :: [Char]
string = unpackCString# "hit"
So GHC has done everything it could to optimize the code -- including computing the math statically -- while still retaining the correct tracing.
References: the useful Eval monad for sequencing was introduced by Simon Marlow.
Reading the source code to GHC, the only expressions that aren't eligible for CSE are those which fail the exprIsBig test. Currently that means the Expr values Note, Let and Case, and expressions which contain those.
Therefore, an answer to the above question would be:
unit = reverse "" `seq` ()

main = print $ trace "hit" (case unit of () -> 1) +
               trace "hit" (case unit of () -> 1)
Here we create a value unit which resolves to (), but whose value GHC can't determine (by using a recursive function GHC can't optimise away; reverse is just a simple one to hand). This means GHC can't CSE the trace function and its two arguments, and we get hit printed twice. This works with both GHC 6.12.4 and 7.0.3 at -O2.
I think you can specify the -fno-cse option in the source file, i.e. by putting a pragma
{-# OPTIONS_GHC -fno-cse #-}
on top.
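For example, a minimal sketch of a module carrying the pragma (it disables CSE only for the module it appears in):
{-# OPTIONS_GHC -fno-cse #-}
module Main where

import Debug.Trace

-- With CSE disabled for this module, both calls to trace survive
-- and "hit" is printed twice.
main :: IO ()
main = print $ trace "hit" 1 + trace "hit" 1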
Another method to avoid common subexpression elimination or let floating in general is to introduce dummy arguments. For example, you can try
let x () = trace "hi" 1 in x () + x ()
This particular example won't necessarily work; ideally, you should specify a data dependency via dummy arguments. For instance, the following is likely to work:
let
  x dummy = trace "hi" $ dummy `seq` 1
  x1 = x ()
  x2 = x x1
in x1 + x2
The result of x now "depends" on the argument dummy and there is no longer a common subexpression.
I'm a bit unsure about Don's sequencing monad (posting this as answer because the site doesn't let me add comments). Modifying the example a bit:
main :: IO ()
main = print $ runEval $ do
    t1 <- track "hit 1" (trace "really hit 1" 1)
    t2 <- track "hit 2" 2
    return (t1 + t2)
This gives us the following output:
hit 1
hit 2
really hit 1
That is, the first trace fires when the t1 <- ... statement is executed, not when t1 is actually evaluated in return (t1 + t2). If we define the monadic bind operator as
Done x >>= k = k x
Trace s a >>= k = k (trace s a)
instead, the output will reflect the actual evaluation order:
hit 1
really hit 1
hit 2
That is, the traces will fire when (t1 + t2) is actually evaluated, which is (IMO) what we really want. For example, if we change (t1 + t2) to (t2 + t1), this solution produces the following output:
hit 2
hit 1
really hit 1
The output of the original version remains unchanged, and we don't see when our terms are really evaluated:
hit 1
hit 2
really hit 1
Like the original solution, this also works with -O3 (tested on GHC 7.0.3).