In HoTT, and also in Coq, one cannot prove UIP (uniqueness of identity proofs), i.e.
∏ (p : a = a), p = refl a
But one can prove:
∏ (p : a = a), (a, p) = (a, refl a)
Why is this defined as it is?
Is it because one wants to have a nice homotopy interpretation?
Or is there some natural, deeper reason for this definition?
Today we know of a good reason for rejecting UIP: it is incompatible with the principle of univalence from homotopy type theory, which roughly says that isomorphic types can be identified. However, as far as I am aware, the reason that Coq's equality does not validate UIP is mostly a historical accident inherited from one of its ancestors: Martin-Löf's intensional type theory, which predates HoTT by many years.
The behavior of equality in ITT was originally motivated by the desire to keep type checking decidable. This is possible in ITT because it requires us to explicitly mark every rewriting step in a proof. (Formally, these rewriting steps correspond to the use of the equality eliminator eq_rect in Coq.) By contrast, Martin-Löf designed another system called extensional type theory where rewriting is implicit: whenever two terms a and b are equal, in the sense that we can prove that a = b, they can be used interchangeably. This relies on an equality reflection rule which says that propositionally equal elements are also definitionally equal. Unfortunately, there is a price to pay for this convenience: type checking becomes undecidable. Roughly speaking, the type-checking algorithm relies crucially on the explicit rewriting steps of ITT to guide its computation, whereas these hints are absent in ETT.
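To make the explicit marking concrete, here is a minimal Coq sketch (transport is the HoTT name for this operation; the definition is essentially eq_rect written out as a pattern match):

(* Rewriting along p : a = b is an explicit step in the proof term: *)
(* given a proof of P a, transport it to a proof of P b.            *)
Definition transport {A : Type} (P : A -> Type) {a b : A}
  (p : a = b) : P a -> P b :=
  match p in (_ = y) return (P a -> P y) with
  | eq_refl => fun x => x
  end.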
We can prove UIP easily in ETT because of the equality reflection rule; however, it was unknown for a long time whether UIP was provable in ITT. We had to wait until the 1990s for the work of Hofmann and Streicher, which showed that UIP cannot be proved in ITT by constructing a model in which UIP fails. (Check also these slides by Hofmann, which explain the issue from a historical perspective.)
Edit: This doesn't mean that UIP is incompatible with decidable type checking: it was shown later that it can be derived in other decidable variants of Martin-Löf type theory (such as Agda), and it can be safely added as an axiom in a system like Coq.
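For instance, this is, up to phrasing, the axiom one would add (a sketch; Coq's standard library ships equivalent formulations in Coq.Logic.Eqdep and Coq.Logic.EqdepFacts):

Axiom UIP : forall (A : Type) (a b : A) (p q : a = b), p = q.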
Intuitively, I tend to think of a = a as pi_1(A,a), i.e. the paths from a to itself modulo homotopy; whereas I think of { x : A | a = x } as the universal covering space of A, i.e. paths from a to some other point of A modulo homotopy. So, while pi_1(A,a) is often non-trivial, we do have that the universal covering space of A is contractible.
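In Coq, the provable statement from the question is exactly this contractibility, and the proof is a two-line sketch (sigT / existT is Coq's dependent pair, so existT _ a eq_refl plays the role of (a, refl a)):

Lemma based_paths_contr {A : Type} (a : A) (c : {x : A & a = x}) :
  existT _ a eq_refl = c.
Proof.
  (* Split the pair, then contract the path by path induction. *)
  destruct c as [x p]. destruct p. reflexivity.
Qed.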
I am looking for an SWI-Prolog predicate that can compute a set union with variables as arguments. My aim is to state the union first and instantiate the arguments later on in the source code.
For example, I have some predicate union such that the call union(A, B, A_UNION_B) makes sense. More concretely, the call:
union(A, [1,2], C), A=[3].
would give me the result
C = [3, 1, 2].
(What you call union/3 is most probably just concatenation, so I will use append/3 to keep this answer short.)
What you expect is impossible without delayed goals or constraints. To see this, consider the following failure-slice:
?- append(A, [1,2], C), false, A=[3].
loops, unexpected. % observed, but for us unexpected
false. % expected, but not the case
This query must terminate in order for the whole approach to be useful. But there are infinitely many lists of different lengths for A. So, in order to describe all possible solutions, we would need infinitely many answer substitutions, like:
?- append(A, [1,2], C).
A = [], C = [1,2]
; A = [_A], C = [_A,1,2]
; A = [_A,_B], C = [_A,_B,1,2]
; A = [_A,_B,_C], C = [_A,_B,_C,1,2]
; ... .
The only way around is to describe that set of solutions with finitely many answers. One possibility could be:
?- when((ground(A);ground(C)), append(A,B,C)).
when((ground(A);ground(C)),append(A,B,C)).
Essentially it reads: Yes, the query is true, provided the query is true.
While this solves your exact problem, it will now delay many otherwise succeeding goals; think of A = [X], B = [].
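For example (a sketch; the exact shape of the residual goal varies between systems):

?- when((ground(A);ground(C)), append(A,B,C)), A = [X], B = [].
A = [X],
B = [],
when((ground([X]);ground(C)),append([X],[],C)).

The goal stays delayed, even though append([X], [], C) would deterministically succeed with C = [X].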
A more elaborate version could provide more complex tests, but it would require a somewhat different definition than append/3. Some systems like SICStus Prolog provide block declarations to make this smoother (SWI has a coarse emulation of them).
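For illustration, a sketch of such a declaration (SICStus syntax; app/3 is my name for a block-declared variant of append/3, and the declaration means a call delays until the first or the third argument is instantiated):

:- block app(-, ?, -).

app([], Bs, Bs).
app([A|As], Bs, [A|Cs]) :-
   app(As, Bs, Cs).

With it, the query ?- app(A, [1,2], C), A = [3]. first delays, then wakes up on A = [3] and succeeds with C = [3,1,2].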
So it is possible to make this even better, but the question remains whether or not this makes much sense. After all, debugging delayed goals becomes more and more difficult with larger programs.
In many situations it is preferable to prevent this and to produce an instantiation error instead, as iwhen/2 does:
?- iwhen((ground(A);ground(C)),append(A,B,C)).
error(instantiation_error,iwhen/2).
That error is not the nicest answer possible, but at least it is not incorrect. It says: You need to provide more instantiations.
If you really want to solve this problem in the general case, you have to delve into E-unification. That is an area with the most trivial problem statements and extremely involved answers. Often, mere decidability is non-trivial, let alone an effective algorithm. For your particular question, either ACI (for sets) or ANlr (for concatenation) unification is of interest. ACI requires solving Diophantine equations, and associative unification alone is even more complex than that. I am unaware of any implementation for a Prolog system that solves the general problem.
Prolog IV offered an associative infix operator for concatenation but simply delayed more complex cases. So debugging these remains non-trivial.
The use of context is briefly mentioned in the K tutorial as a way to customize the order of evaluation. But I'm also seeing other context statements that contain rewrite arrows in them, like this one in the untyped SIMPLE language:
context ++(HOLE => lvalue(HOLE))
rule <k> ++loc(L) => I +Int 1 ...</k>
<store>... L |-> (I => I +Int 1) ...</store> [increment]
Could someone explain how exactly contexts work in K? In particular, I'm interested in:
Is there a more general usage of context in K than just stating the order of evaluation?
How does the order in which context statements are declared affect the semantics?
Thank you!
More detailed information about context declarations in K can be found in K's documentation here. In particular, contexts with rewrite arrows mean that heating and cooling will wrap the term to be heated or cooled in a particular symbol. In your example, that symbol is lvalue.
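For intuition, the heating and cooling rules generated from that context look roughly like this (a sketch only; the actual generated rules use internal freezer symbols and side conditions):

rule <k> ++ E => lvalue(E) ~> ++ HOLE ... </k>   // heating: wrap the subterm in lvalue
rule <k> lvalue(V) ~> ++ HOLE => ++ V ... </k>   // cooling: unwrap and plug the result back

In SIMPLE, the lvalue rules rewrite inside the wrapper (for example, lvalue(X => loc(L))), so what gets plugged back is a bare location, and the cooled term ++ loc(L) then matches the increment rule quoted in the question.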
To answer your questions specifically:
Context declarations, like strictness attributes, are primarily used to specify the evaluation strategy. While in theory they can be used for other things, in practice this rarely happens. That said, evaluation strategies can be complex, which is part of why K has so many different features relating to evaluation strategy. In the example you mentioned, we use rewrites in a context declaration in order to provide a separate set of rules for evaluating lvalues (i.e., to avoid evaluating all the way to a value, and to evaluate only to a location).
K's sentences are unordered. Within a single module, you can reorder any of its sentences (except import statements, which must appear first) and there will not be an effect on the intended semantics (although backends may result in slightly different behavior for concrete execution if your semantics is nondeterministic). This includes context declarations.
I recently read that option-operand separation is a principle that was introduced in the Eiffel language (I've never used Eiffel).
From the Wikipedia article:
[Option–operand separation] states that an operation's arguments should contain only operands — understood as information necessary to its operation — and not options — understood as auxiliary information. Options are supposed to be set in separate operations.
Does this mean that a function should only contain "essential" arguments that are part of its functionality, and that there shouldn't be any arguments that change the functionality (which instead should be a separate function)?
Could someone explain it simply, preferably with pseudocode example(s)?
Yes, this is the idea: arguments should not be used to select particular behavior. Different methods (features in Eiffel terms) should be used instead.
Example. Suppose there is a method that moves a 2-D figure to a given position. The position could be specified using either polar or Cartesian coordinates:
move (coordinate_1, coordinate_2: REAL_64; is_polar: BOOLEAN)
-- Move the figure to the position (coordinate_1, coordinate_2)
-- using polar system if is_polar is True, and Cartesian system otherwise.
According to the principle, it's better to define two functions:
cartesian_move (x, y: REAL_64)
-- Move the figure to the position with Cartesian coordinates (x, y).
polar_move (rho, phi: REAL_64)
-- Move the figure to the position with polar coordinates (rho, phi).
Although the principle seems universally applicable, some object-oriented languages do not provide sufficient means to follow it in certain cases. The obvious example is constructors, which in many languages must all share the same name, so using options becomes the only choice (a workaround is to use object factories in these cases).
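Eiffel itself avoids the constructor case with named creation procedures. A sketch, in the same signature-plus-comment style as above (the class and feature names are mine):

class POINT

create
    make_cartesian, make_polar

feature

    make_cartesian (x, y: REAL_64)
        -- Initialize from Cartesian coordinates (x, y).

    make_polar (rho, phi: REAL_64)
        -- Initialize from polar coordinates (rho, phi).

end

A caller then writes create p.make_polar (rho, phi) instead of passing an is_polar flag to a single constructor.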
I would like to test out the Lambda Calculus interpreter that I've written against a fairly large test set of Lambda Calculus expressions. Does anyone know of a Lambda Calc expression generator I can use (couldn't find anything upon an initial search on Google)? These expressions would obviously have to be properly formed.
Better yet, while I have created various examples myself and worked out the solutions so I could check the results, does anyone know of a good (and large) set of worked out Lambda Calculus reduction problems with solutions? I can type in the expressions myself so it's more important to just have a good variety of simpler (and larger) lambda calculus expressions upon which I can test my interpreter (which at the moment models Normal Order and Call by Name evaluation strategies).
Any help or guidance would be greatly appreciated.
Asperti and Guerrini (1998, The Optimal Implementation of Functional Programming Languages, Cambridge University Press; see especially chapters 5 and 6) describe some of the more painful lambda terms that arise from Jean-Jacques Lévy's theory of families of redexes and labelled reduction: these give measures of the complexity of interactions between colliding beta reductions, where reducing either redex creates work for the other.
A relatively simple example of colliding reductions is:
let D = λx.(x x); F = λf.(f (f y)); and I = λx.x in
(D (F I))
which has two beta-redexes and reduces to (y y): reduce either one of them by regular substitution and you will create two new redexes, each of which is related to a piece of structure in the original term.
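Spelled out, a normal-order reduction of this term runs:

(D (F I))
→ ((F I) (F I))        -- the D-redex duplicates the F-redex
→ ((I (I y)) (F I))    -- reducing the left copy creates two I-redexes
→ ((I y) (F I))
→ (y (F I))
→ (y (I (I y)))
→ (y (I y))
→ (y y)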
Iterating Church numerals is good in the same way:
let T = λfx. f (f x) in
λfx.(T (T (T (T T))) f x)
(which reduces to the Church numeral for 65,536: applying Church numeral m to n yields the numeral for n^m, so T (T (T (T T))) is (((2^2)^2)^2)^2 = 2^16), and this generates a lot of colliding redexes.
Generally, applying higher-order terms to each other, regardless of whether they are "well-typed" or make obvious sense, is a good source of hard work that generates complex intermediate structure.
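Failing a ready-made test set, one option is to generate random closed terms yourself. Below is a minimal sketch in Haskell using QuickCheck; Term and genTerm are my own names, terms come out well-formed by construction (de Bruijn indices never escape their scope), and you would pretty-print them into your interpreter's concrete syntax:

import Test.QuickCheck

-- Closed lambda terms, with variables as de Bruijn indices.
data Term = Var Int | Lam Term | App Term Term
  deriving Show

-- Generate a term of bounded depth with `scope` binders in scope.
genTerm :: Int -> Int -> Gen Term
genTerm depth scope
  | depth <= 0 && scope > 0 = Var <$> choose (0, scope - 1)
  | depth <= 0              = pure (Lam (Var 0))   -- fall back to the identity
  | otherwise = oneof $
      [ Lam <$> genTerm (depth - 1) (scope + 1)
      , App <$> genTerm (depth - 1) scope <*> genTerm (depth - 1) scope
      ] ++ [ Var <$> choose (0, scope - 1) | scope > 0 ]

main :: IO ()
main = sample (genTerm 6 0)   -- print a handful of random closed terms

Note that blind generation gives you well-formed input but not known-good answers; for those you would still reduce the generated terms with a reference implementation and compare.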
float pi = 3.14;
float (^piSquare)(void) = ^(void){ return pi * pi; };
float (^piSquare2)(void) = ^(void){ return pi * pi; };
[piSquare isEqualTo: piSquare2]; // -> want it to behave like -isEqualToString...
To expand on Laurent's answer.
A Block is a combination of implementation and data. For two blocks to be equal, they would need to have both the exact same implementation and have captured the exact same data. Comparison, thus, requires comparing both the implementation and the data.
One might think comparing the implementation would be easy. It actually isn't, because of the way the compiler's optimizer works.
While comparing simple data is fairly straightforward, blocks can capture objects, including C++ objects (which might actually work someday), and comparison may or may not need to take that into account. A naive implementation would simply do a byte-level comparison of the captured contents. However, one might also desire to test equality of objects using the object-level comparators.
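A hypothetical illustration of that choice (plain Foundation code; s1 and s2 are equal as objects but distinct pointers):

NSString *s1 = [NSMutableString stringWithString:@"hi"];
NSString *s2 = [NSMutableString stringWithString:@"hi"];
void (^b1)(void) = ^{ NSLog(@"%@", s1); };
void (^b2)(void) = ^{ NSLog(@"%@", s2); };
// A byte-level comparison of the captured data says "different",
// because the captured pointers differ; an object-level comparison
// says "equal", because [s1 isEqual:s2] holds.  Which answer should
// a hypothetical -isEqualToBlock: give?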
Then there is the issue of __block variables. A block, itself, doesn't actually have any metadata related to __block captured variables as it doesn't need it to fulfill the requirements of said variables. Thus, comparison couldn't compare __block values without significantly changing compiler codegen.
All of this is to say that, no, it isn't currently possible to compare blocks and to outline some of the reasons why. If you feel that this would be useful, file a bug via http://bugreport.apple.com/ and provide a use case.
Putting aside issues of compiler implementation and language design, what you're asking for is provably undecidable (unless you only care about detecting 100% identical programs). Deciding if two programs compute the same function is equivalent to solving the halting problem. This is a classic consequence of Rice's Theorem: Any "interesting" property of Turing machines is undecidable, where "interesting" just means that it's true for some machines and false for others.
Just for fun, here's the proof. Assume we can create a function to decide if two blocks are equivalent, called EQ(b1, b2). Now we'll use that function to solve the halting problem. We create a new function HALT(M, I) that tells us if Turing machine M will halt on input I like so:
BOOL HALT(M, I) {
    return EQ(
        ^int (void) { return 0; },          // a block that ignores M and I
        ^int (void) { M(I); return 0; }     // a block that first runs M on I
    );
}
If M(I) halts then the blocks are equivalent, so HALT(M,I) returns YES. If M(I) doesn't halt then the blocks are not equivalent, so HALT(M,I) returns NO. Note that we don't have to execute the blocks -- our hypothetical EQ function can compute their equivalence just by looking at them.
We have now solved the halting problem, which we know is not possible. Therefore, EQ cannot exist.
I don't think this is possible. Blocks can be roughly seen as advanced functions (with access to global or local variables). In the same way that you cannot compare functions' contents, you cannot compare blocks' contents.
All you can do is to compare their low-level implementation, but I doubt that the compiler will guarantee that two blocks with the same content share their implementation.
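Concretely (a sketch reusing the question's setup): the only comparison that compiles today is pointer identity, and it does not see that the bodies are identical:

float pi = 3.14f;
float (^sq1)(void) = ^{ return pi * pi; };
float (^sq2)(void) = ^{ return pi * pi; };
// == compares the block object pointers, not their implementations;
// in practice this prints 0 even though the two blocks are textually
// identical.
NSLog(@"%d", sq1 == sq2);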