Iterating through record fields

In Elm, one of my records (the type alias construct) has a lot of entries, and I was wondering if there's a built-in way to iterate through it, either directly or by converting it to a Dict.
So I was thinking something like:
let
    myRecord = MyRecord ...
    showEntry key value = ...
in
    map showEntry myRecord
Thanks for your time!
Update
To answer the questions about the actual data, I have 12 fields of the same type. The 12 fields are all required to have that exact name, hence the record type and not a Dict. Whether it's named as a string in a record or a type in itself doesn't matter, as long as I can uniquely identify that value from the other 11. In code the record looks something like:
type alias InnerType = { value : Int, ... }
type alias Record = { inner1 : InnerType, inner2 : InnerType, ... }
Since all the fields in the record have the same type, I just wanted to see if there's an easier way to go through them instead of naming all 12. Unless there's a better way to represent this, in which case I'm all ears! :-)

If your record will only ever have twelve elements and you don't mind trading a little verbosity in your type definition for type safety (make impossible states impossible!), you could start with an enumeration of those twelve indexes:
type Index
    = Inner1
    | Inner2
    ...
    | Inner12
You could then redefine Record as a union type instead of a record type alias:
type Record =
    Record
        InnerType
        InnerType
        -- repeat InnerType 12 times
Now you can create a few convenience functions for getting and setting values in a type-safe manner:
get : Index -> Record -> InnerType
get i (Record i1 i2 i3 i4 i5 i6 i7 i8 i9 i10 i11 i12) =
    case i of
        Inner1 -> i1
        Inner2 -> i2
        ...
        Inner12 -> i12
set : Index -> InnerType -> Record -> Record
set i val (Record i1 i2 i3 i4 i5 i6 i7 i8 i9 i10 i11 i12) =
    case i of
        Inner1 -> Record val i2 i3 i4 i5 i6 i7 i8 i9 i10 i11 i12
        Inner2 -> Record i1 val i3 i4 i5 i6 i7 i8 i9 i10 i11 i12
        ...
        Inner12 -> Record i1 i2 i3 i4 i5 i6 i7 i8 i9 i10 i11 val
And you can create a list from your record like this:
toList : Record -> List InnerType
toList (Record i1 i2 i3 i4 i5 i6 i7 i8 i9 i10 i11 i12) =
    [ i1, i2, i3, i4, i5, i6, i7, i8, i9, i10, i11, i12 ]
As you can see, this gets verbose, but the verbosity is confined to the definition of the Record type. If you put all this code in a Record module, then accessing and modifying records is clean, concise and, more importantly, type safe. You are not subject to a Dict where a key may not exist, or a List or Array where an index might not exist. Your Record type is sealed up nice and tight, at the expense of a little verbosity in the definition.
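For instance, a small usage sketch might look like this (it assumes the value : Int field from the question; total and bumpInner2 are illustrative names, not part of the answer above):

-- sum the value field across all twelve entries
total : Record -> Int
total record =
    toList record
        |> List.map .value
        |> List.sum

-- increment the value stored at Inner2, leaving the other entries untouched
bumpInner2 : Record -> Record
bumpInner2 record =
    let
        inner = get Inner2 record
    in
        set Inner2 { inner | value = inner.value + 1 } record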
Here is a gist containing the full definition.

It looks like you need a List (or an Array) of InnerType, since, as you said, the structure itself is fixed at 12 of these. As for iterating, here's an example of JSON-encoding a list of InnerType values:
inners
|> List.indexedMap (\idx inner -> ("inner" ++ toString idx, encodeInner inner))
|> Json.Encode.object
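For the plain iteration the question asks about, the same approach works outside of JSON encoding too; a sketch (showAll and the "innerN" labels are just for illustration, not from the answer above):

-- render each inner value as a labelled string, e.g. "inner1: 42"
showAll : List InnerType -> List String
showAll inners =
    inners
        |> List.indexedMap
            (\idx inner -> "inner" ++ toString (idx + 1) ++ ": " ++ toString inner.value)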

Coq: Is it possible to prove that, if two records are different, then one of their fields is different?

Say you have a record:
Record Example := {
fieldA : nat;
fieldB : nat
}.
Can we prove:
Lemma record_difference : forall (e1 e2 : Example),
e1 <> e2 ->
(fieldA e1 <> fieldA e2)
\/
(fieldB e1 <> fieldB e2).
If so, how?
On the one hand, it looks true, since records are completely determined by their fields. On the other, without knowing what made e1 different from e2 in the first place, how are we supposed to decide which side of the disjunction to prove?
As a comparison, note that if there is only one field in the record, we are able to prove the respective lemma:
Record SmallExample := {
field : nat
}.
Lemma record_dif_small : forall (e1 e2 : SmallExample),
e1 <> e2 -> field e1 <> field e2.
Proof.
unfold not; intros; apply H.
destruct e1; destruct e2; simpl in H0.
f_equal; auto.
Qed.
On the other, without knowing what made e1 different from e2 in the first place, how are we supposed to decide which side of the disjunction to prove?
That is precisely the point: we need to figure out what makes both records different. We can do this by testing whether fieldA e1 = fieldA e2.
Require Import Coq.Arith.PeanoNat.
Record Example := {
fieldA : nat;
fieldB : nat
}.
Lemma record_difference : forall (e1 e2 : Example),
e1 <> e2 ->
(fieldA e1 <> fieldA e2)
\/
(fieldB e1 <> fieldB e2).
Proof.
intros [n1 m1] [n2 m2] He1e2; simpl.
destruct (Nat.eq_dec n1 n2) as [en|nen]; try now left.
right. intros em. congruence.
Qed.
Here, Nat.eq_dec is a function from the standard library that allows us to check whether two natural numbers are equal:
Nat.eq_dec : forall n m, {n = m} + {n <> m}.
The {P} + {~ P} notation denotes a special kind of boolean that gives you a proof of P or ~ P when destructed, depending on which side it lies on.
It is worth stepping through this proof to see what is going on. On the third line of the proof, for instance, executing intros em leads to the following goal.
n1, m1, n2, m2 : nat
He1e2 : {| fieldA := n1; fieldB := m1 |} <> {| fieldA := n2; fieldB := m2 |}
en : n1 = n2
em : m1 = m2
============================
False
If en and em hold, then the two records must be equal, contradicting He1e2. The congruence tactic simply instructs Coq to try to figure this out by itself.
Edit
It is interesting to see how far one can get without decidable equality. The following similar statement can be proved trivially:
forall (A B : Type) (p1 p2 : A * B),
p1 = p2 <-> fst p1 = fst p2 /\ snd p1 = snd p2.
By contraposition, we get
forall (A B : Type) (p1 p2 : A * B),
p1 <> p2 <-> ~ (fst p1 = fst p2 /\ snd p1 = snd p2).
It is here that we get stuck without a decidability assumption. De Morgan's laws would allow us to convert the right-hand side to a statement of the form ~ P \/ ~ Q; however, their proof appeals to decidability, which is not generally available in Coq's constructive logic.

Is there a way to automate a Coq proof with rewrite steps?

I am working on a proof and one of my subgoals looks a bit like this:
Goal forall
(a b : bool)
(p: Prop)
(H1: p -> a = b)
(H2: p),
negb a = negb b.
Proof.
intros.
apply H1 in H2. rewrite H2. reflexivity.
Qed.
The proof does not rely on any outside lemmas and just consists of applying one hypothesis in the context to another hypothesis and doing rewriting steps with a known hypothesis.
Is there a way to automate this? I tried intros. auto., but it had no effect. I suspect this is because auto can only do apply steps but not rewrite steps, though I am not sure. Maybe I need a stronger tactic?
The reason I want to automate this is that in my original problem I actually have a large number of subgoals that are very similar to this one, but with small differences in the names of the hypotheses (H1, H2, etc), the number of hypotheses (sometimes there is an extra induction hypothesis or two) and the boolean formula at the end. I think that if I could use automation to solve this my overall proof script would be more concise and robust.
Edit: What if there is a forall in one of the hypotheses?
Goal forall
(a b c : bool)
(p: bool -> Prop)
(H1: forall x, p x -> a = b)
(H2: p c),
negb a = negb b.
Proof.
intros.
apply H1 in H2. subst. reflexivity.
Qed.
When you see a repetitive pattern in the way you prove some lemmas, you can often define your own tactics to automate the proofs.
In your specific case, you could write the following:
Ltac rewrite_all' :=
match goal with
| H : _ |- _ => rewrite H; rewrite_all'
| _ => idtac
end.
Ltac apply_in_all :=
match goal with
| H : _, H2 : _ |- _ => apply H in H2; apply_in_all
| _ => idtac
end.
Ltac my_tac :=
intros;
apply_in_all;
rewrite_all';
auto.
Goal forall (a b : bool) (p: Prop) (H1: p -> a = b) (H2: p), negb a = negb b.
Proof.
my_tac.
Qed.
Goal forall (a b c : bool) (p: bool -> Prop)
(H1: forall x, p x -> a = b)
(H2: p c),
negb a = negb b.
Proof.
my_tac.
Qed.
If you want to follow this path of writing proofs, a reference that is often recommended (but that I haven't read) is CPDT by Adam Chlipala.
This particular goal can be solved like this:
Goal forall (a b : bool) (p: Prop) (H1: p -> a = b) (H2: p),
negb a = negb b.
Proof.
now intuition; subst.
Qed.
Or, using the destruct_all tactic (provided you don't have a lot of boolean variables):
intros; destruct_all bool; intuition.
The above has been modeled after the destr_bool tactic, defined in Coq.Bool.Bool:
Ltac destr_bool :=
intros; destruct_all bool; simpl in *; trivial; try discriminate.
You could also try using something like
destr_bool; intuition.
to fire up powerful intuition after simpler destr_bool.
now is defined in Coq.Init.Tactics as follows
Tactic Notation "now" tactic(t) := t; easy.
easy is defined right above it and (as its name suggests) can solve easy goals.
intuition can solve goals which require applying the laws of (intuitionistic) logic. E.g. the following two hypotheses from the original version of the question require an application of the modus ponens law.
H1 : p -> false = true
H2 : p
auto, on the other hand, doesn't do that by default; it also doesn't solve contradictions.
If your hypotheses include some first-order logic statements, the firstorder tactic may be the answer (like in this case) -- just replace intuition with it.

Is there a way to cache a function result in Elm?

I want to calculate the nth Fibonacci number with O(1) complexity and O(n_max) preprocessing.
To do it, I need to store previously calculated value like in this C++ code:
#include <vector>
using namespace std;

vector<int> cache;

int fibonacci(int n)
{
    if (n <= 0)
        return 0;
    if (cache.size() > n - 1)
        return cache[n - 1];
    int res;
    if (n <= 2)
        res = 1;
    else
        res = fibonacci(n - 1) + fibonacci(n - 2);
    cache.push_back(res);
    return res;
}
But it relies on side effects which are not allowed in Elm.
Fibonacci
A normal recursive definition of fibonacci in Elm would be:
fib1 n = if n <= 1 then n else fib1 (n-2) + fib1 (n-1)
Caching
If you want simple caching, the maxsnew/lazy library should work. It uses some side effects in the native JavaScript code to cache computation results. It went through a review to check that the native code doesn't expose side effects to the Elm user; for memoisation it's easy to check that it preserves the semantics of the program.
You should be careful in how you use this library. When you create a Lazy value, the first time you force it, it will take time, and from then on the result is cached. But if you recreate the Lazy value multiple times, those copies won't share a cache. So, for example, this DOESN'T work:
fib2 n =
    Lazy.lazy
        (\() ->
            if n <= 1 then
                n
            else
                Lazy.force (fib2 (n-2)) + Lazy.force (fib2 (n-1))
        )
Working solution
What I usually see used for Fibonacci is a lazy list. I'll just give the whole thing as a compiling piece of code:
import Lazy exposing (Lazy)
import Debug
-- slow
fib1 n = if n <= 1 then n else fib1 (n-2) + fib1 (n-1)
-- still just as slow
fib2 n = Lazy.lazy <| \() -> if n <= 1 then n else Lazy.force (fib2 (n-2)) + Lazy.force (fib2 (n-1))
type List a = Empty | Node a (Lazy (List a))

cons : a -> Lazy (List a) -> Lazy (List a)
cons first rest =
    Lazy.lazy <| \() -> Node first rest

unsafeTail : Lazy (List a) -> Lazy (List a)
unsafeTail ll =
    case Lazy.force ll of
        Empty -> Debug.crash "unsafeTail: empty lazy list"
        Node _ t -> t

map2 : (a -> b -> c) -> Lazy (List a) -> Lazy (List b) -> Lazy (List c)
map2 f ll lr =
    Lazy.map2
        (\l r ->
            case (l, r) of
                (Node lh lt, Node rh rt) -> Node (f lh rh) (map2 f lt rt)
        )
        ll
        lr

-- lazy list you can index into, better speed
fib3 = cons 0 (cons 1 (map2 (+) fib3 (unsafeTail fib3)))
So fib3 is a lazy list that has all the fibonacci numbers. Because it uses fib3 itself internally, it'll use the same (cached) lazy values and not need to compute much.
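The answer above builds the list but doesn't show how to read a value out of it; a minimal indexing sketch (index and fib are my names, not part of the original answer) could be:

-- walk n nodes into the lazy list; Nothing only if the list ends early
index : Int -> Lazy (List a) -> Maybe a
index n ll =
    case Lazy.force ll of
        Empty -> Nothing
        Node h t ->
            if n <= 0 then
                Just h
            else
                index (n - 1) t

-- e.g. fib 10 == Just 55
fib : Int -> Maybe Int
fib n =
    index n fib3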

CTL Equivalence checking

I'm told the following CTL formulas aren't equivalent. However, I can't find a model in which one is true and the other isn't. CTL is Computation Tree Logic, a branching-time temporal logic.
Formula 1: AF p OR AF q
Formula 2: AF( p OR q )
The first says: for every path starting from the initial state there is eventually a state in which p holds, OR for every path starting from the initial state there is eventually a state in which q holds.
The second: for every path starting from the initial state there is eventually a state in which p OR q holds.
The model is a little bit tricky. Firstly, one should note that AF p OR AF q implies AF (p OR q). So, we are looking for a model in which AF (p OR q) is true but AF p OR AF q is false.
I am assuming that you are familiar with Kripke model notation described in Logic in Computer Science textbook by M. Huth and M. Ryan (see http://www.cs.bham.ac.uk/research/projects/lics/).
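As a reminder of the standard CTL semantics used here: M, s |= AF phi holds iff on every path s = s_0 -> s_1 -> s_2 -> ... starting at s there is some i with M, s_i |= phi. So to refute AF p at a state, it suffices to exhibit a single path from that state along which p never holds.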
Let M = (S, R, L) be a model with S = {s0, s1, s2} as the set of possible states, R = {(s0,s1), (s0,s2), (s1,s1), (s1,s2), (s2,s2)} as the transition relation, and L is a labeling function defined as follows: L(s0) = {} (empty set), L(s1) = {p}, and L(s2) = {q}.
Suppose the starting state is s0. It is clear that AF (p OR q) holds at s0. However, AF p OR AF q is not satisfied at s0. To prove this, we have to show that s0 does not satisfy AF p *and* s0 does not satisfy AF q.
AF p is not satisfied at s0 since we can choose the path s0 -> s2 -> s2 -> s2 -> ...
Similarly, AF q is not satisfied at s0, since we can choose the path s0 -> s1 -> s1 -> s1 -> ...

Closure properties of context free languages

I have the following problem:
Languages L1 = {a^n b^n : n >= 0} and L2 = {b^n a^n : n >= 0} are
context-free languages, so their concatenation L1L2 is context free as well,
so L = {a^n b^2n a^n : n >= 0} must be context free too, because it is
generated by a closure property.
I have to prove if this is true or not.
So I checked the language L and I don't think it is context free; I also noticed that L2 is just L1 reversed.
Do I have to check if L1, L2 are deterministic?
L1 = {a^n b^n : n >= 0} and L2 = {b^n a^n : n >= 0} are both context free.
Since context-free languages are closed under concatenation, L3=L1L2 is also context-free.
However, L3 is not the same language as L4 = {a^n b^2n a^n : n >= 0}.
The string abbbaa is in L3, but not L4.
So is L4 context-free? If so, it must obey the pumping lemma for context-free languages.
Let p be the pumping length of L4. Choose s = a^p b^2p a^p.
Then s is in L4, and |s| > p. Therefore, if L4 is context-free, we can write s as uvxyz, with |vxy| <= p, |vy| >= 1, and u v^n x y^n z in L4 for any n >= 0.
Observe the following properties of any nonempty string in L4:
- The count of a's equals the count of b's.
- There is exactly one occurrence of the substring 'ab', and exactly one occurrence of the substring 'ba'.
- The length of the initial string of a's equals the length of the final string of a's.
We can use these observations to constrain the possible choices of v and y in our pumping argument for L4. Neither v nor y can contain the substring 'ab', because then, as v and y are pumped an arbitrary number of times, the output string would contain multiple occurrences of 'ab', and therefore cannot be an element of L4. The same argument applies to the substring 'ba'.
So v must be either empty, all a's, or all b's. The same applies to y.
Furthermore, if v is all a's, then y must consist of the same number of b's; otherwise, the pumped string would contain unequal numbers of a's and b's since v and y are pumped by
the same n. Likewise, if v is all b's, then y must be the same number of a's.
But if v is all a's, and y is all b's, the final string of a's is unaffected by pumping v and y, therefore the leading string of a's will no longer match the trailing string of a's.
Similarly, if v is all b's and y is all a's, the leading and trailing strings of a's will again have different lengths as v and y are pumped.
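To make the first of these two cases concrete (this instance is mine, not part of the original argument): if v = a^k is taken from the leading block of a's and y = b^k, then pumping once more gives
u v^2 x y^2 z = a^(p+k) b^(2p+k) a^p, with 1 <= k <= p,
which has equal numbers of a's and b's but a leading run of a's longer than the trailing one, so it is not in L4.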
v and y cannot both be empty, since that would violate the condition |vy| >= 1 of the CFL pumping lemma. Nor can exactly one of them be empty: the nonempty one would then consist solely of a's or solely of b's, so pumping would change the counts of a's and b's by different amounts, again producing a string outside L4.
So every decomposition uvxyz that the pumping lemma allows leads, after pumping, to a string that violates one of the properties above. Therefore L4 is not context-free.
I'm not sure that it is -- note that in each of the definitions of L1 and L2, n is scoped within that definition, i.e. they are two different variables. When you concatenate the languages you should rename one of them, and instead get:
L1L2 = {a^n b^n b^m a^m : n, m >= 0}
This is a very different language from your L, but it is obviously a context-free one.
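For instance (this grammar is mine, not from the original answer), one context-free grammar generating L1L2 is:
S -> A B
A -> a A b | ε    (generates a^n b^n)
B -> b B a | ε    (generates b^m a^m)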