Can a Functor instance be declared with an additional type restriction for the function? - frege

I'm working on porting GHC/Arr.hs into Frege.
Array is defined:
data Array i e = Array{u,l::i,n::Int,elems::(JArray e)}
There is function:
amap :: (Ix i, ArrayElem e) => (a -> b) -> Array i a -> Array i b
Now, I don't know how to define a Functor instance for it. The obvious definition would be:
instance (Ix i) => Functor (Array i) where
    fmap = amap
But the compiler complains that the inferred type is more constrained than expected, which seems true. Can I make Array a Functor with the restriction that the mapped functions go from ArrayElem to ArrayElem?

No, this is not possible.
If you base Array on JArray and want a Functor instance, you must not use any functions that give rise to the ArrayElem (or any other additional) context.
Another way to say this is that you cannot base Array on type-safe Java arrays, but must deal with Java arrays of type Object[]. This is because, as you have no doubt noted, the ArrayElem type class is just a trick to be able to provide the correct Java type when a Java array is created, which is of course important for interfacing with Java and for performance reasons.
Note that there is another problem with type-safe Java arrays. Let's say we want to make an array of Double (but the same argument holds for any other element type). AFAIK, Haskell mandates that Array elements must be lazy. Hence we really cannot use the Java type double[] (to which JArray Double would be the Frege counterpart) to model it, because if we did, every array element would have to be evaluated as soon as it is set.
For this reason, I suggest you use some general custom array element type, like
data AElem a = AE () a
mkAE = AE ()
unAE (AE _ x) = x
derive ArrayElement AElem
and change your definition:
data Array i e = Array{u,l::i,n::Int,elems::(JArray (AElem e))}
Now your Functor instance can be written: the ArrayElem constraint does not arise, because when you access the elems array the compiler knows that it contains AElem values and can and will supply the correct instance.
In addition, constructing AElem values and using them as the actual array elements does not impose strictness on the wrapped value.
Needless to say, the user of the Array module should not (need to) know about those implementation details, that is, the AElem type.
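To illustrate the idea, here is a small, self-contained sketch in plain Haskell (not Frege, and not the real JArray API; the class Elem, the Store type and all other names are made up for this example). A constrained backing store cannot give you a Functor directly, but once every element is wrapped in a fixed box type, the constraint no longer leaks into the instance:
class Elem e                         -- stands in for Frege's ArrayElem
instance Elem Int
instance Elem Double

data AElem a = AE () a               -- uniform wrapper, lazy in its payload
instance Elem (AElem a)              -- one instance covers every payload type

unAE :: AElem a -> a
unAE (AE _ x) = x

newtype Store e = Store [e]          -- plays the role of JArray here

mkStore :: Elem e => [e] -> Store e  -- creating a store needs the constraint ...
mkStore = Store

mapStore :: Elem b => (a -> b) -> Store a -> Store b
mapStore f (Store xs) = Store (map f xs)

data Array i e = Array { lo, hi :: i, n :: Int, elems :: Store (AElem e) }

instance Functor (Array i) where     -- ... but the instance does not
    fmap f arr = arr { elems = mapStore (AE () . f . unAE) (elems arr) }
The Ix i constraint and the actual array machinery are omitted here; the only point of the sketch is where the element constraint does (and does not) appear.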

Why is the key type parameter of a Kotlin Map invariant?

The Map interface in Kotlin (using V1.6.21) has a signature of
interface Map<K, out V>
Why is K invariant instead of covariant (out K)?
The documentation of type parameter K says:
The map is invariant in its key type, as it can accept key as a parameter (of containsKey for example) and return it in keys set.
However, interface Set is covariant in the element type, so the last part ("return it in keys set") is not applicable, at least not immediately.
Further, the type parameter K is used only at occurrences where the map state is not modified, for lookup purposes (methods containsKey, get, getOrDefault). At these places, isn't it safe to use @UnsafeVariance? After all, that same technique was employed for Map's value type parameter V, for example in containsValue, to allow making V covariant.
My guess would be that using a Map<KSubtype, V> as a Map<KSupertype, V> (where KSubtype : KSupertype) does not really make a lot of sense because the former, by construction, cannot contain entries with keys other than KSubtype.
So a proper implementation should return null from all calls to get(kSupertype) as well as return false from those to containsKey(kSupertype).
In the case of Set<out E> it's only the contains function that needs unsafe variance, and Map would also require unsafe variance on get. This might have been too much of a peculiarity to support, compared to the value of supporting the use case.
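For illustration, here is a small hypothetical Kotlin sketch of that asymmetry (the widened map is exactly the case the language rejects):
fun main() {
    val strings: Set<String> = setOf("a", "b")
    val anys: Set<Any> = strings              // fine: Set<out E> is covariant in E
    println(anys.contains(42))                // contains takes @UnsafeVariance E; prints false

    val byName: Map<String, Int> = mapOf("a" to 1)
    // val widened: Map<Any, Int> = byName    // rejected: Map is invariant in K
    // If this were allowed, widened[42] and widened.containsKey(42) could only
    // ever yield null / false, as argued above.
}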

Modelling object-oriented program in Coq

I want to prove some facts about an imperative object-oriented program. How can I represent a heterogeneous object graph in Coq? My main problem is that the edges are implicit: each node consists of an integer label modelling an object address and a data structure that models the object's state. The implicit edges are thus formed by the fields inside the data structure that model object pointers and contain the address label of another node in the graph. To ensure that my graph is valid, adding a new node to the graph must require a proof that all fields in the data structure being added refer to nodes that already exist in the graph. But how can I express 'all pointer fields in a data structure' in Coq?
It depends on how you represent a data structure, and what kinds of features the language you want to model has. Here's one possibility. Let's say that your language has two kinds of values: numbers and object references. We can write this type in Coq as:
Inductive value : Type :=
| VNum (n : nat)
| VRef (ref : nat).
A reference (or pointer) is just a natural number that can be used to uniquely identify objects on the heap. We can use functions to represent both objects and the heap as follows:
Definition object : Type := string -> option value.
Definition heap : Type := nat -> option object.
Paraphrasing in English, an object is a partial function from strings (which we use to model fields in the object) to values, and a heap is a partial function from nats (that is, object references) to objects. We can then express your property as:
Definition object_ok (o : object) (h : heap) : Prop :=
forall (s : string) (ref : nat),
o s = Some (VRef ref) ->
exists obj, h ref = Some obj.
Again, in English: if the field s of the object o is defined, and equal to a reference ref, then there exists some object obj stored at that address on the heap h.
The one problem with that representation is that Coq functions make it possible for heaps to have infinitely many objects, and objects to have infinitely many fields. You can circumvent this problem with an alternative representation that only allows for functions defined on finitely many inputs, such as lists of pairs, or (even better) a type of finite maps, such as this one.
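For instance, a minimal finite variant (a sketch using plain association lists rather than a dedicated finite-map library, and reusing the value type from above) could look like:
Require Import List String.

(* Finite representations: an object is a list of (field name, value) pairs,
   and a heap is a list of (address, object) pairs. *)
Definition object' : Type := list (string * value).
Definition heap' : Type := list (nat * object').

(* Same property as object_ok, but objects and heaps are now finite by construction. *)
Definition object_ok' (o : object') (h : heap') : Prop :=
  forall (s : string) (ref : nat),
    In (s, VRef ref) o ->
    exists obj, In (ref, obj) h.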

Frege: can I derive "Show" for a recursive type?

I'm trying to implement the classical tree structure in Frege, which works nicely as long as I don't use "derive":
data Tree a = Node a (Tree a) (Tree a)
            | Empty
derive Show Tree
gives me
realworld/chapter3/E_Recursive_Types.fr:7: kind error,
type constructor `Tree` has kind *->*, expected was *
Is this not supported or do I have to declare it differently?
Welcome to the world of type kinds!
You must give the full type of the items you want to show. Tree is not a type (kind *), but something that needs a type parameter to become one (kind * -> *).
Try
derive Show (Tree a)
Note that this is shorthand for
derive Show (Show a => Tree a)
which reflects the fact that, to show a tree, you also need to know how to show the values in the tree (at least, the code generated by derive will need to know this - of course, one could write an instance manually that prints just the shape of the tree and so would not need it).
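For example, such a hand-written instance that shows only the shape (and therefore needs no constraint on the element type) might look roughly like this (Haskell-style; Frege is essentially the same):
instance Show (Tree a) where
    show Empty        = "."
    show (Node _ l r) = "(" ++ show l ++ " " ++ show r ++ ")"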
Generally, the kind needed in instances for every type class is fixed. The error message tells you that you need kind * for Show.
EDIT: to eliminate another possible misconception
Note that this has nothing to do with your type being recursive. Let's take, for example, the definition of optional values:
data Maybe a = Nothing | Just a
This type is not recursive, and yet we still cannot say:
derive Show Maybe -- same kind error as above!!
But, given the following type class:
class ListSource c -- things we can make a list from
    toList :: c a -> [a]
we need to say:
instance ListSource Maybe where
    toList (Just x) = [x]
    toList Nothing  = []
(instance and derive are equivalent for the sake of this discussion; both make instances, the difference being that derive generates the instance functions automatically for certain type classes.)
It is, admittedly, not obvious why it works this way in one case and differently in the other. The key is, in every case, the type of the class operation we want to use. For example, in class Show we have:
class Show s where
    show :: s -> String
Now we see that the so-called class type variable s (which represents any future instantiated type expression) appears on its own to the left of the function arrow. This, of course, indicates that s must be a plain type (kind *), because we pass a value to show, and every value has, by definition, a type of kind *. We can have values of types Int or Maybe Int or Tree String, but no value ever has a type Maybe or Tree.
On the other hand, in the definition of ListSource, the class type variable c is applied to some other type variable a in the type of toList, and a also appears as the list element type. From the latter, we can conclude that a has kind * (because list elements are values). We know that the types to the left and to the right of a function arrow must also have kind *, since functions take and return values. Therefore, c a has kind *. Thus, c alone is something that, when applied to a type of kind *, yields a type of kind *. This is written * -> *.
This means, in plain English: when we want to make an instance for ListSource, we need the type constructor of some "container" type that is parameterized with another type. Tree and Maybe would be possible here, but not Int.
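To tie this back to the Tree type from the question, here is a sketch using the ListSource class shown above: Tree has kind * -> *, so it fits ListSource as-is, while Show needs the fully applied Tree a.
instance ListSource Tree where
    toList Empty        = []
    toList (Node x l r) = toList l ++ (x : toList r)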

cannot define a constructor as a bound function

class A
  constructor: ->
  method: ->
In the above example, method is not bound to the class and neither is constructor.
class B
  constructor: ->
  method: =>
In this case, method is bound to the class. It behaves as you would expect a normal object method to behave and has access to all of class B's fields. But the constructor is not bound? That seems strange. So I tried the following.
class C
  constructor: =>
  method: =>
This doesn't compile. I would expect the syntax to be the same on all methods that are bound to a class.
I would like to regard the -> operator as a static operator and the => operator as a dynamic one, but it doesn't seem like you can. If you could, a method defined with the -> operator could not be called with super; but, in actuality, it can. Why does this make sense for the syntax of an object-oriented language? It seems not to agree with most object-oriented languages' inheritance rules.
Try looking at how the code compiles. When you use =>, the methods are bound inside the constructor. Thus, it doesn't make any sense to use => for a constructor - when would it be bound?
I'm not sure about your issue with static vs. dynamic operators, but you can definitely call methods defined with the -> operator with super. The only thing -> vs => affects is that the => ensures that this is the object in question regardless of how it is called.
Summary of comments:
Calling the difference between -> and => analogous to static vs. dynamic (or virtual) does not quite convey what those operators do. They are used to get different behavior from JavaScript's this variable. For example, look at the following code:
class C
  constructor: ->
  method1: ->
    console.log this
  method2: =>
    console.log this
c = new C()
c.method1()            # prints c
f = c.method1; f()     # prints window
c.method2()            # prints c
f = c.method2; f()     # prints c
The difference is in the second way we call each method: if the method is not "bound" to the object, its this is set by looking at what precedes the method call (separated by a .). In the first case, this is c, but in the second f isn't being called on an object, so this is set to window. method2 doesn't have this problem because it is bound to the object.
Normally, you can think of the constructor function as automatically being bound to the object that it is constructing (thus, you can't bind it with =>). However, it's worth noting that this isn't quite what's happening, because if a constructor returns an object, that object will be the return value of the construction, rather than the this that was used during the constructor.
I think you're massively confused as to the meaning of the '=>', or fat arrow.
First off, though, your examples aren't actually valid CoffeeScript, are they? There is no -> after the class declaration, and adding one is a compiler error.
Back to the fat arrow: there's no mapping to the terms static and dynamic that I can think of that would apply here. Instead, the fat arrow is convenient syntactic sugar for wrapping a function in a closure that holds a reference to the object you're calling the function on.
A possible C++ analogy is to say that the fat arrow is a way of automatically creating a functor: it lets you give the function as a callback to a third party who can call it without knowing your object, but the code invoked inside will have access to your object as the this pointer. It serves no other purpose, and has no bearing on whether a function can be overloaded, or whether it can have access to super.

Is every method returning `this` a monad?

Is every method on a class which returns this a monad?
I'm going to say a very cautious "possibly". A lot of this is contingent on your definitions.
It's worth noting that I'm taking the definition of monad from the category theory construct, not the functional programming construct.
If you think of a method A of class C that maps a C instance to another C instance (i.e. it returns this), then it would appear that C.A() is a functor from the category consisting of C instances to itself. Therefore it's an endofunctor, at least. It would appear that this construction obeys the basic identity and associativity properties that we expect, but further inspection would be required to say for sure.
Anyway, I wouldn't stake my life on it, and I'm not certain this is a very helpful way about thinking of such constructions, but it does seem a reasonable assumption on first inspection, at least.
I have a limited understanding of monads. I can't tell whether that meets the formal definition of a monad (I don't think so, but I don't know for sure), but return this; alone doesn't allow any of the cool things monads allow (fluent interfaces are nice, but not monads imho, and nowhere near as useful as even simple monads like the option type monad).
This snippet from wikipedia seems to say "no":
Formally, a monad is constructed by defining two operations (bind and return) and a type constructor M [... further restrictions we don't need here]
Edit (nitpick): Moreover, a monad is a type and not an operation (e.g. a method) - the question should rather read "Is a class a monad if all of its methods return this?"
Probably not, at least not in any of the usual ways.
Monads in programming are typically defined over a category of types with functions as arrows. In that case, a method returning this is an arrow from the class to itself: an endomorphism, which gives the usual monoid under function composition, but not a functor.
Note that functors involving function types are certainly possible, but a functor F(A) => (A -> A) doesn't really work because the type appears in both covariant and contravariant position, that is, given a function A -> B you can send A -> A to A -> B, or you can send B -> B to A -> B, but you can't get a B -> B from A -> A or vice versa.
However, there is one way to view instances as having monadic structure. Consider that instance methods effectively have this as an implicit argument. So for some class C, its methods are functions from C to whatever other type. This corresponds roughly to the covariant function functor above. Note that I'm not describing any particular class here, but the entire concept of classes and instances! So, for this mapping from C to instance methods of C:
If we have an instance method returning some type A and a function with type A -> B, we can trivially define a method returning something of type B: that's the rest of the functor definition, a.k.a. fmap in Haskell.
If we have some value of type A, we can add a trivial instance method that just returns that value: that's the monad's "unit" operation, a.k.a. return in Haskell.
If we have an instance method returning a value of type A, and another instance method taking an argument of type A and returning a value of type B, we can define a method that simply returns a value of type B by combining them. That's the monadic bind, a.k.a. (>>=) in Haskell.
Haskell calls the monad of "functions that all take a first argument of some fixed type" the Reader monad, and the do notation for it lets you write code where that first argument is implicitly available, rather like the way that this is implicitly available inside instance methods.
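Here is a small Haskell sketch of that correspondence (using the Reader type from the mtl package; the Counter record and its "methods" are invented for the example):
import Control.Monad.Reader

data Counter = Counter { count :: Int, step :: Int }

-- "instance methods": each one implicitly receives a Counter,
-- much as a method implicitly receives this
current :: Reader Counter Int
current = asks count

next :: Reader Counter Int
next = do
    c <- asks count     -- like reading this.count
    s <- asks step      -- like reading this.step
    return (c + s)

main :: IO ()
main = print (runReader next (Counter 10 2))    -- prints 12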
The difference here is that with class instances, the monadic structure is... sort of at the level of the syntax, not something you can use directly in a program, at least not in most languages.
In my opinion, No.
There are at least two issues I see with it.
A monad often acts as glue between two functions. In this case, methodA returns a type on which the next method, methodB, is invoked (and of course methodA and methodB both belong to the same type).
A monad is supposed to allow type transformations. So if functionA returns TypeX and functionB expects TypeY, the monad needs to provide a bind operation which can convert a Monad(TypeX) into a Monad(TypeY). The monad then takes the return value of the first function, wraps it as a Monad(TypeX), and transforms it into a Monad(TypeY), from which the TypeY is extracted and fed into functionB.
A method which returns this is actually an implementation of a Fluent Interface. And while many have argued it to be monadic as well, I would only say that while it helps solve problems similar to those that monads could otherwise solve, and while the solution would seem similar to how a monadic solution might work (instead of the "." operator, the bind method of the monad has to be invoked without any explicit do block), it is not a monad. In other words, it may walk like a monad and talk like a monad, but it is not a monad.
Slight correction to point 2: the monad needs to provide mechanisms to a) convert TypeX into Monad(TypeX), b) transform Monad(TypeX) into Monad(TypeY), and c) extract TypeY from Monad(TypeY).
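As a tiny Haskell illustration of that corrected point (names invented for the example), using the Maybe monad: bind unwraps the TypeX, feeds it to a function that produces a wrapped TypeY, and the result stays inside the monad.
halveThenShow :: Int -> Maybe String     -- Int plays TypeX, String plays TypeY
halveThenShow n
    | even n    = Just (show (n `div` 2))
    | otherwise = Nothing

example :: Maybe String
example = Just 10 >>= halveThenShow      -- Just "5"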