I would like to make a function that given a function type (e.g. String -> Nat -> Bool), would return a list of types corresponding to that function type (e.g. [String, Nat, Bool]). Presumably the signature of such a function would be Type -> List Type, but I am struggling to determine how it would be implemented.
I don't believe it could be done in general, because you cannot pattern-match on functions. Neither can you check the type of a function. That is not what dependent types are about. Just like in Haskell or OCaml, the only thing you can actually do with a function is apply it to some argument. However, I devised a trick which might do:
myFun : {a, b : Type} -> (a -> b) -> List Type
myFun {a} {b} _ = [a, b]
Now, a -> b is the only signature that will match an arbitrary function. But, of course, it does not behave the way you'd like for functions of arity higher than one:
> myFun (+)
[Integer, Integer -> Integer] : List Type
So some sort of recursive call to itself would be necessary to extract more argument types:
myFun : {a, b : Type} -> (a -> b) -> List Type
myFun {a} {b} _ = a :: myFun b
The problem here is that b is an arbitrary type, not necessarily a function type, and I cannot figure out any way to dynamically check whether it is a function or not, so I suppose this is as much as you can do in Idris.
However, dynamic checking for types (at least in my opinion) is not a feature to be desired in a statically typed language. After all, the whole point of static typing is to specify in advance what kind of arguments a function can handle and to prevent calling functions with invalid arguments at compile time. So basically you probably don't really need it at all. If you specified what your grander goal was, someone would likely show you the right way of doing it.
I can write a lambda expression outside of the parentheses, but I cannot put it there by name. I have tried many ways:
val plus3: (Int,Int,Int)->Int = {a,b,c->a+b+c}
println(apply3(1,2,3){a,b,c->a+b+c}) // OK
println(apply3(1,2,3){plus3}) // Type mismatch. Required: Int, Found: (Int,Int,Int)->Int
println(apply3(1,2,3){(plus3)}) // Type mismatch. Required: Int, Found: (Int,Int,Int)->Int
println(apply3(1,2,3)plus3) // unresolved reference
println(apply3(1,2,3){plus3()}) // value captured in a closure
println(apply3(1,2,3){(plus3)()}) // value captured in a closure
What is the syntax to put a name there (outside of the parentheses)?
I don't know why, but there is not a word about this in the documentation. It says we can put a lambda there, but not a word about a variable or constant that denotes that lambda.
I don't know why, but there is not a word about this in the documentation.
Yes, there is:
In Kotlin, there is a convention that if the last parameter to a function is a function, and you're passing a lambda expression as the corresponding argument, you can specify it outside of parentheses
plus3 is an identifier and not a lambda expression, so you can't specify it outside of parentheses.
The type of plus3 is (Int, Int, Int) -> Int, the same as that of {a,b,c->a+b+c}. Look again at the messages that I am getting from the Kotlin compiler.
You mean the error messages when you pass { plus3 }? By Kotlin rules { plus3 } is a lambda which ignores its argument (if any) and returns plus3. So the rule applies, and apply3(1,2,3){plus3} means the same as apply3(1,2,3,{plus3}).
It sees plus3 as Int.
Exactly the opposite: it expects to see an Int as the return value of the lambda and sees plus3 which is (Int,Int,Int) -> Int.
So the problem here is not of a high philosophical nature, but seems purely syntactic.
That was exactly my point: the rule is purely syntactic; it's applied before the compiler knows anything about the type or value of plus3, and so it doesn't know or care whether this value happens to be a lambda.
The rule could instead say
In Kotlin, there is a convention that if the last parameter to a function has a function type, you can specify it outside of parentheses
in which case apply3(1,2,3) plus3 would work. But it doesn't.
Placing a lambda expression outside of a function call's parentheses is the same as placing it inside the parentheses like this:
println(apply3(1, 2, 3, { a, b, c -> a + b + c }))
From here, we can simply assign the lambda to a val (as you have done) which results in:
val plus3: (Int, Int, Int) -> Int = { a, b, c -> a + b + c }
println(apply3(1, 2, 3, plus3))
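
For completeness, here is a minimal, self-contained sketch. The question never shows apply3, so the definition below is only an assumption about its shape (three Ints plus a function of type (Int, Int, Int) -> Int); the calls then illustrate both spellings:

// Hypothetical apply3, assumed for illustration only: it applies the given
// function to the three Int arguments.
fun apply3(a: Int, b: Int, c: Int, f: (Int, Int, Int) -> Int): Int = f(a, b, c)

val plus3: (Int, Int, Int) -> Int = { a, b, c -> a + b + c }

fun main() {
    println(apply3(1, 2, 3) { a, b, c -> a + b + c })  // trailing lambda: prints 6
    println(apply3(1, 2, 3, plus3))                    // named function value goes inside the parentheses: prints 6
    // println(apply3(1, 2, 3) { plus3 })              // type mismatch: { plus3 } is a lambda whose body
    //                                                 // returns plus3, not an Int
}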
In a project I'm working on with a colleague, we are using the UrlParser module and we stumbled upon this error:
The type annotation for ourParser does not match its definition.
The type annotation is saying:
UrlParser.Parser a a
But I am inferring that the definition has this type:
UrlParser.Parser (String -> ∞) (String -> ∞)
Hint: A type annotation is too generic. You can probably just switch
to the type I inferred. These issues can be subtle though, so read
more about it.
Our code is something like
ourParser : UrlParser.Parser a a
ourParser =
    UrlParser.oneOf
        [ UrlParser.s "home"
        , UrlParser.s "detail" </> UrlParser.string
        ]
The main question is: what is this ∞ symbol? Where is it defined? If I try to copy/paste it in my function definition I get a syntax error, as if Elm actually doesn't know what that character is...
The follow-up question is: how does such an error arise from my code?
The second parser in your list of alternatives combines
UrlParser.s "detail" : Parser a a
UrlParser.string : Parser (String -> b) b
using
(</>) : Parser u v -> Parser v w -> Parser u w
As you can hopefully see, the following types must match up:
u ~ a
v ~ a
v ~ (String -> b)
w ~ b
The resulting type is
UrlParser.s "detail" </> UrlParser.string : Parser (String -> b) b
The first parser in your list of alternatives has type
UrlParser.s "home" : Parser c c
Because you're building a list of these, they must have the same general type. As such, c ~ (String -> b), but also c ~ b. Substituting one into the other gives b ~ (String -> b): b would have to contain itself. What you have here is a loop, resulting in an infinite type. That is what the infinity symbol means.
The error text is indeed misleading, because infinite types are not supported in Elm's type system (because they make no sense). This sounds like a bug, as Elm should explain that infinite types always point to a programming mistake.
The documentation for oneOf shows how parsers of different types could be combined through the use of format.
In any case, you need to turn your first parser into something of the type Parser (String -> c) c. From the types, it looks like applying format "some string" to the first parser would already suffice, but I don't know enough about Elm or the UrlParser to give any guarantees about that.
I have a data type in Idris:
data L3 = Rejected | Unproven | Proven
which I have verified to be a ring with unity, a lattice, a group, and to satisfy some other properties too.
Now I want to create an object which preserves the expressions of the statements I inject into it. I started out with four categories to represent all the operations, so I get a nice syntax tree out of it. E.g.:
Om [Proven, Unproven, Op [Proven, Oj [Unproven, Proven]]]
This is not the real representation (I stripped some of the needed ugly parts), but it gives an idea of what I am trying to achieve; the above is equivalent to:
meet Proven (meet Unproven (Proven <+> (join Unproven Proven)))
I recognized I could join the data types together into one. To get there I created a function which picks the correct class instance:
%case data Operator = Join | Meet | Plus | Mult
classChoice : (x: Operator) -> (Type -> Type)
classChoice Join = VerifiedJoinSemilattice
classChoice Meet = VerifiedMeetSemilattice
classChoice Plus = VerifiedGroup
classChoice Mult = VerifiedRing
So I could ensure that anything in the type represents one of those four operations:
%elim data LogicSyntacticalCategory : classChoice op a => (op : Operator) -> (a : Type) -> Type where
  LSCEmpty : LogicSyntacticalCategory op a
It will complain with:
When elaborating type of logicCategory.LSCEmpty:
Can't resolve type class classChoice op ty
Now my question: how can I ensure that the objects in my data type are verified, and join the four separate data types into one? I really would like to ensure this during construction. I can understand it has difficulties resolving the type class now, but I want Idris to ensure it can do so later, during construction. How can I do this?
Code isn't really needed, I am quite happy with a direction of thought.
Two minor problems first: ... -> a -> ... should be ... -> (a : Type) -> ..., and "syntactical" is how it's spelled.
Warning: I'm working with Idris 0.9.18 and don't know how to write Elab proofs yet.
Repository: https://github.com/runKleisli/idris-classdata
In normal functions with these same type signatures, you have the opportunity to assist type class resolution with tactics while defining the functions. But with the data type and its constructors, you only get to declare them, so there is no such opportunity to assist in resolution. It would appear such guided resolution was needed here.
It appears that classChoice op a needs an instance resolved before the LogicSyntacticalCategory op a in the definition of LSCEmpty makes sense, and that it did not get this instance. Class constraints in the data type's type like this are usually introduced automatically into the context of the constructor, like an implicit argument, but this seems to have failed here, and an instance is assumed for a different type than the one required. That the instance assumed for the constructor does not satisfy the goal introduced by declaring a LogicSyntacticalCategory op a seems to be the error. In one of the examples in the repository, such an unexpectedly mismatched goal and assumption seem able to pair up automatically, but not under the circumstances of the data type and constructor declarations. I can't figure out the exact problem, but it seems not to apply to plain function declarations with the same conditions on the type signature.
A couple of solutions are given in the repository, but the easiest one is to replace the constraint argument, which says an instance of classChoice op a is required, with an implicit argument of type classChoice op a, and to evaluate LogicSyntacticalCategory like this:
feat : Type
feat = ?feat'
feat' = proof
  exact (LogicSyntacticalCategory Mult ZZ {P=%instance})
If you are set on having a constraint argument in your main interface to the data type, you can wrap the definition of LogicSyntacticalCategory : (op : Operator) -> (a : Type) -> {P : classChoice op a} -> Type with the function
logicSyntacticalCategory : classChoice op a => (op : Operator) -> (a : Type) -> Type
logicSyntacticalCategory = ?mkLogical
mkLogical = proof
  intros
  exact (LogicSyntacticalCategory op a {P=constrarg})
and when you want to make a type of the form LogicSyntacticalCategory op a, evaluate like before, but with
feat' = proof
  exact (logicSyntacticalCategory Mult ZZ)
  exact Mult
  exact ZZ
  compute
  exact inst -- for the named instance (inst) of (classChoice Mult ZZ)
where the last line is dropped for anonymous instances.
I know the language exists, but I can't put my finger on it: a language with dynamic scope and static typing?
We can try to reason about what such a language might look like. Obviously something like this (using a C-like syntax for demonstration purposes) cannot be allowed, or at least not with the obvious meaning:
int x_plus_(int y) {
    return x + y; // requires that x have type int
}

int three_plus_(int y) {
    double x = 3.0;
    return x_plus_(y); // calls x_plus_ when x has type double
}
So, how to avoid this?
I can think of a few approaches offhand:
1. Commenters above mention that Fortran pre-'77 had this behavior. That worked because a variable's name determined its type; a function like x_plus_ above would be illegal, because x could never have an integer type. (And likewise one like three_plus_, for that matter, because y would have the same restriction.) Integer variables had to have names beginning with i, j, k, l, m, or n.
2. Perl uses syntax to distinguish a few broad categories of variables, namely scalars vs. arrays (regular arrays) vs. hashes (associative arrays). Variables belonging to the different categories can have the exact same name, because the syntax distinguishes which one is meant. For example, the expression foo $foo, $foo[0], $foo{'foo'} involves the function foo, the scalar $foo, the array @foo ($foo[0] being the first element of @foo), and the hash %foo ($foo{'foo'} being the value in %foo corresponding to the key 'foo'). Now, to be quite clear, Perl is not statically typed, because there are many different scalar types, and these types are not distinguished syntactically. (In particular: all references are scalars, even references to functions or arrays or hashes. So if you use the syntax to dereference a reference to an array, Perl has to check at runtime to see if the value really is an array-reference.) But this same approach could be used for a bona fide type system, especially if the type system were a very simple one. With that approach, the x_plus_ method would be using an x of type int, and would completely ignore the x declared by three_plus_. (Instead, it would use an x of type int that had to be provided from whatever scope called three_plus_.) This could either require some type annotations not included above, or it could use some form of type inference.
3. A function's signature could indicate the non-local variables it uses, and their expected types. In the above example, x_plus_ would have the signature "takes one argument of type int; uses a calling-scope x of type int; returns a value of type int". Then, just like how a function that calls x_plus_ would have to pass in an argument of type int, it would also have to provide a variable named x of type int — either by declaring it itself, or by inheriting that part of the type-signature (since calling x_plus_ is equivalent to using an x of type int) and propagating this requirement up to its callers. With this approach, the three_plus_ function above would be illegal, because it would violate the signature of the x_plus_ method it invokes — just the same as if it tried to pass a double as its argument. (A small Kotlin sketch of this idea appears at the end of this answer.)
4. The above could just have "undefined behavior"; the compiler wouldn't have to explicitly detect and reject it, but the spec wouldn't impose any particular requirements on how it had to handle it. It would be the responsibility of programmers to ensure that they never invoke a function with incorrectly-typed non-local variables.
Your professor was presumably thinking of #1, since pre-'77 Fortran was an actual real-world language with this property. But the other approaches are interesting to think about. :-)
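
To make approach 3 a little more concrete, here is a small Kotlin sketch (all names are hypothetical). Kotlin is lexically scoped, so this is only an analogy: the "dynamic environment" is made explicit as a receiver type, which records the required non-local variable and its type in the function's signature, much as approach 3 proposes.

// Hypothetical illustration: the non-local "x" a function needs is recorded
// in its signature as a receiver type, so callers must supply an Int x.
class Env(val x: Int)

// "takes one argument of type Int; uses a calling-scope x of type Int; returns an Int"
fun Env.xPlus(y: Int): Int = x + y

fun threePlus(y: Int): Int {
    val scope = Env(3)    // a caller supplying x as a Double simply cannot build this Env
    return scope.xPlus(y)
}

fun main() {
    println(threePlus(4)) // prints 7
}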
I haven't found it written down anywhere else, but the AXIOM CAS (and various forks, including FriCAS, which is still actively developed) uses a scripting language called SPAD with both a very novel strong static dependent type system and dynamic scoping (although it is possibly an unintended implementation bug).
Most of the time the user won't realize that, but when they start trying to build closures as in other functional languages, it reveals its dynamic scoping nature:
FriCAS Computer Algebra System
Version: FriCAS 2021-03-06
Timestamp: Mon May 17 10:43:08 CST 2021
-----------------------------------------------------------------------------
Issue )copyright to view copyright notices.
Issue )summary for a summary of useful system commands.
Issue )quit to leave FriCAS and return to shell.
-----------------------------------------------------------------------------
(1) -> foo (x,y) == x + y
Type: Void
(2) -> foo (1,2)
Compiling function foo with type (PositiveInteger, PositiveInteger)
-> PositiveInteger
(2) 3
Type: PositiveInteger
(3) -> foo
(3) foo (x, y) == x + y
Type: FunctionCalled(foo)
(4) -> bar x y == x + y
Type: Void
(5) -> bar
(5) bar x == y +-> x + y
Type: FunctionCalled(bar)
(6) -> (bar 1)
Compiling function bar with type PositiveInteger ->
AnonymousFunction
(6) y +-> #1 + y
Type: AnonymousFunction
(7) -> ((bar 1) 2)
(7) #1 + 2
Type: Polynomial(Integer)
Such behavior is similar to what happens when trying to build a closure using (lambda (x) (lambda (y) (+ x y))) in a dynamically scoped Lisp, such as Emacs Lisp. Actually, the underlying representation of functions is essentially the same as in early Lisp, since AXIOM was first developed on top of an early Lisp implementation on an IBM mainframe.
I believe it is, however, a defect (like what happened when John McCarthy implemented the first version of LISP), because the implementor made the parser do the currying seen in the function definition of bar, but it is unlikely to be useful without the ability to build closures in the language.
It is also worth noticing that SPAD automatically renames the variables in anonymous functions to avoid capture, so its dynamic scoping could be used as a feature, as in other Lisps.
Dynamic scope means that a variable and its type at a specific line of your code depend on the functions called before. This means you cannot know the type at a specific line of your code, because you cannot know which code has been executed before.
Static typing means that you have to know the type at every line of your code before the code starts to run.
This is irreconcilable.