Type Error in Alloy specification - requirements

In the Requirements Engineering (2007) article, "Requirement Progression In Problem Frames", there is a worked example on a traffic lights problem that I have transcribed into the Alloy editor. Unfortunately, I get the following error when testing the code.
Starting the solver...
A type error has occurred:
This must be a set or relation.
Instead, it has the following possible type(s):
{PrimitiveBoolean}
The error is triggered by the following predicate:
pred LightUnitBreadcrumb [] {
  all t: Time |
    NGObserve [t] <=>
    odd [NGPulse [t]] and
    SGObserve [t] <=>
    odd [SGPulse [t]] }
referencing the NGPulse and SGPulse predicates defined below:
sig NGP, SGP, NRP, SRP in Time {}
pred NGPulse [t: Time] {t in NGP}
pred SGPulse [t: Time] {t in SGP}
pred NRPulse [t: Time] {t in NRP}
pred SRPulse [t: Time] {t in SRP}

My guess is that odd expects a set-valued (or integer-valued) expression between its square brackets, not a call to the NGPulse or SGPulse predicates. Predicates are boolean-valued, not set- or relation-valued expressions, hence the error.
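A sketch of one possible repair, assuming the article's intent is that a light is observed green iff an odd number of green pulses have occurred so far. The odd helper and the use of util/ordering on Time are my assumptions, not the article's:
open util/ordering[Time]
open util/integer

-- hypothetical helper: parity of an integer
pred odd [i: Int] { rem[i, 2] = 1 }

pred LightUnitBreadcrumb {
  all t: Time |
    -- count the pulse times strictly before t and take the parity;
    -- explicit parentheses matter: `and` binds tighter than `<=>` in Alloy
    (NGObserve[t] <=> odd[#(NGP & prevs[t])]) and
    (SGObserve[t] <=> odd[#(SGP & prevs[t])])
}
Because NGP and SGP are declared as subsets of Time, set expressions like NGP & prevs[t] are legal where the boolean-valued predicate calls were not.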

What are partial and unsafe operators?

While reading the Pony Tutorial on Operations, I noticed that some infix operators have partial and unsafe versions. Coming from C#, Java, Python, and JS/TS, I haven't the faintest clue what those do.
In C# there are checked and unchecked contexts for arithmetic. In a checked block, math that would result in an overflow throws an exception. Are the unsafe operators related to that?
Can someone please explain unsafe and partial operators?
The regular operators, such as add/+ and mod/%%, always return the most sensible result they can. This leads to some surprising conventions, such as division by zero being equal to 0. That is because these functions are total (non-partial) in the mathematical sense: for every input there is a defined output, even if that output is unusual, like an addition that overflows producing a result smaller than its inputs.
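For example (a minimal sketch; the behaviour shown is the documented total behaviour of Pony's integer operators):
actor Main
  new create(env: Env) =>
    let max: U8 = 255
    let zero: U8 = 0
    // regular + is total: an overflowing add wraps around, so 255 + 1 == 0
    env.out.print((max + 1).string())
    // regular / is total too: division by zero is defined to be 0
    env.out.print((U8(1) / zero).string())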
However, there are certain situations where having a clearly defined result for every input is not what the programmer wants. That's where unsafe and partial operators come in.
Since these functions must return defined results (see the division-by-zero example above), they have to run extra, comparatively expensive instructions to provide that guarantee. The unsafe versions of the operators remove those guarantees and use a quicker CPU instruction that may give unexpected results for certain inputs. This is useful when you know your inputs cannot reach those conditions (e.g. overflow, division by zero) and you want to squeeze out some extra performance. From the documented definition of the add_unsafe/+~ and mod_unsafe/%%~ operators in trait Integer¹, for example:
fun add_unsafe(y: A): A
  """
  Unsafe operation.
  If the operation overflows, the result is undefined.
  """

fun mod_unsafe(y: A): A
  """
  Calculates the modulo of this number after floored division by `y`.
  Unsafe operation.
  If y is 0, the result is undefined.
  If the operation overflows, the result is undefined.
  """
Alternatively, you may want your code to detect at runtime the conditions that would produce mathematically wrong results. In that case the functions do not return a value for every set of inputs and are therefore partial; they raise errors that you can handle as usual. Reading the documentation of the add_partial/+? and mod_partial/%%? operators, also in trait Integer¹, we find:
fun add_partial(y: A): A ?
  """
  Add y to this number.
  If the operation overflows this function errors.
  """

fun mod_partial(y: A): A ?
  """
  Calculates the modulo of this number and `y` after floored division (`fld`).
  The result has the sign of the divisor.
  If y is `0` or the operation overflows, this function errors.
  """
¹ This trait is implemented by all integer types in Pony, both signed and unsigned.
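Putting it together, a minimal usage sketch (the operator spellings +? and %%? come from the documentation above; the rest is illustrative):
actor Main
  new create(env: Env) =>
    let x: U8 = 250
    let zero: U8 = 0
    // +? errors on overflow instead of wrapping, so we can detect it
    try
      env.out.print((x +? 10).string())
    else
      env.out.print("overflow detected")
    end
    // %%? errors on a zero divisor instead of returning 0
    try
      env.out.print((x %%? zero).string())
    else
      env.out.print("division by zero detected")
    end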

How to remove elements from a vector in a fast way in Clojure?

I'm trying to remove elements from a Clojure vector:
Note that I'm using Clojure's operations from Kotlin
val set = PersistentHashSet.create("foo")
val vec = PersistentVector.create("foo", "bar")
val seq = clojure.`core$remove`.invokeStatic(set, vec) as ISeq
val resultVec = clojure.`core$vec`.invokeStatic(seq) as PersistentVector
This is the equivalent of the following Clojure code:
(remove #{"foo"} ["foo" "bar"])
The code works fine, but I've noticed that creating a vector from the seq is extremely slow. I wrote a benchmark and these were the results:
| Item count | Remove (ms) | Remove + convert back to vector (ms) |
|------------|-------------|---------------------------------------|
| 1000       | 51          | 1355                                  |
| 10000      | 71          | 5123                                  |
Do you know how I can convert the seq resulting from the remove operation back to a vector without the harsh performance penalty?
If it is not possible is there an alternative way to perform the remove operation?
You could try the complementary operation to remove that returns a vector:
(filterv (complement #{"foo"})
         ["foo" "bar"])
Note the use of filterv. The v indicates that it uses a vector from the start, and returns a vector, so no conversion is required. It uses a transient vector behind the scenes, so it should be pretty fast.
I'm negating the predicate with complement so I can use filterv, since there is no removev. remove is just defined as the complement of filter anyway, so this is essentially what you were already doing, only eager instead of lazy.
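If you prefer the remove spelling, a hypothetical removev (not in clojure.core) can be written the same way filterv works internally, accumulating into a transient vector:
(defn removev
  "Eager, vector-returning remove (hypothetical helper, not in core)."
  [pred coll]
  (persistent!
    (reduce (fn [acc x]
              (if (pred x) acc (conj! acc x)))
            (transient [])
            coll)))

(removev #{"foo"} ["foo" "bar"]) ;=> ["bar"]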
What you are trying to do fundamentally performs badly. Vectors are for fast indexed read/write, and O(1) access to the right end. To do anything else you must tear the vector apart and rebuild it again, an O(N) operation. If you need an operation like this to be efficient, you must use a different data structure.
Why not a PersistentHashSet? Fast removal, though not ordered. I do vaguely recall Clojure also having a sorted set in case that’s needed.
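For example, with the core set functions:
;; disj removes an element from a persistent set via structural sharing
(disj #{"foo" "bar"} "foo")           ;=> #{"bar"}

;; sorted-set keeps its elements ordered, if that matters
(disj (sorted-set "bar" "foo") "foo") ;=> #{"bar"}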
Your comparison is misleading: the lazy result of remove is not equivalent to the concrete result of converting back to a vector, because the lazy sequence hasn't done any work yet. Force it, for example with (count (remove ...)), and you will see that this is even slightly slower than just doing (vec (remove ...)). Also, for real speed-critical applications, there is nothing like using a native Java ArrayList:
(ns tst.demo.core
  (:require
    [criterium.core :as crit])
  (:import [java.util ArrayList]))

(def N 1000)
(def tgt-item (/ N 2))
(def pred-set #{(long tgt-item)})

(def data-vec (vec (range N)))
(def data-al (ArrayList. data-vec))
(def tgt-items (ArrayList. [tgt-item]))

(println :lazy)
(crit/quick-bench
  (remove pred-set data-vec))

(println :lazy-count)
(crit/quick-bench
  (count (remove pred-set data-vec)))

(println :vec)
(crit/quick-bench
  (vec (remove pred-set data-vec)))

(println :ArrayList)
(crit/quick-bench
  (let [changed? (.removeAll data-al tgt-items)]
    data-al))
with results:
:lazy        Evaluation count : 35819946    time mean :    10.856 ns
:lazy-count  Evaluation count :     8496    time mean : 69941.171 ns
:vec         Evaluation count :     9492    time mean : 62965.632 ns
:ArrayList   Evaluation count :   167490    time mean :  3594.586 ns

Idiomatic way of listing elements of a sum type in Idris

I have a sum type representing arithmetic operators:
data Operator = Add | Substract | Multiply | Divide
and I'm trying to write a parser for it. For that, I would need an exhaustive list of all the operators.
In Haskell I would use deriving (Enum, Bounded) like suggested in the following StackOverflow question: Getting a list of all possible data type values in Haskell
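For reference, the Haskell version looks like this (a sketch; the type mirrors the Operator defined above):
data Operator = Add | Substract | Multiply | Divide
  deriving (Enum, Bounded, Show)

-- every constructor, with no way to silently forget one
allOperators :: [Operator]
allOperators = [minBound .. maxBound]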
Unfortunately, there doesn't seem to be such a mechanism in Idris, as suggested by Issue #19. There is some ongoing work by David Christiansen on the question, so hopefully the situation will improve in the future: david-christiansen/derive-all-the-instances
Coming from Scala, I am used to listing the elements manually, so I pretty naturally came up with the following:
Operators : Vect 4 Operator
Operators = [Add, Substract, Multiply, Divide]
To make sure that Operators contains all the elements, I added the following proof:
total
opInOps : Elem op Operators
opInOps {op = Add} = Here
opInOps {op = Substract} = There Here
opInOps {op = Multiply} = There (There Here)
opInOps {op = Divide} = There (There (There Here))
so that if I add an element to Operator without adding it to Operators, the totality checker complains:
Parsers.opInOps is not total as there are missing cases
It does the job but it is a lot of boilerplate.
Did I miss something? Is there a better way of doing it?
One option is to use a feature of the language called elaborator reflection to get the list of all constructors.
Here is a pretty dumb approach to solving this particular problem (I'm posting this because the documentation at the moment is very scarce):
%language ElabReflection

data Operator = Add | Subtract | Multiply | Divide

constrsOfOperator : Elab ()
constrsOfOperator =
  do (MkDatatype _ _ _ constrs) <- lookupDatatypeExact `{Operator}
     loop $ map fst constrs
  where loop : List TTName -> Elab ()
        loop [] =
          do fill `([] : List Operator); solve
        loop (c :: cs) =
          do [x, xs] <- apply `(List.(::) : Operator -> List Operator -> List Operator) [False, False]
             solve
             focus x; fill (Var c); solve
             focus xs
             loop cs

allOperators : List Operator
allOperators = %runElab constrsOfOperator
A couple of comments:
It seems that to solve this problem for any inductive datatype of a similar structure one would need to work through the Elaborator Reflection: Extending Idris in Idris paper.
Maybe the pruviloj library has something that might make solving this problem for a more general case easier.

Well-typed and ill-typed lambda terms

I have been trying to understand the applied lambda calculus. Up till now, I have understood how type inference works. But I am not able to follow what it means to say that a term is well-typed or ill-typed, nor how I can determine which of the two a given term is.
For example, consider a lambda term tw defined as λx[(x x)] . How to conclude whether it is a well-typed or ill-typed term?
If we are talking about Simply Typed Lambda Calculus with some additional constants and basic types (i.e. applied lambda calculus), then the term λx:σ. (x x) is well-formed, but ill-typed.
'Well-formed' means syntactically correct, i.e. it will be accepted by a parser for the STLC. 'Ill-typed' means the type-checker will not pass it any further.
The type-checker works according to the typing rules, which are usually expressed as a number of typing judgements (one typing scheme for each syntactic form).
Let me show that the term you provided is indeed ill-typed.
According to rule (3) [see the typing rules link], λx:σ. (x x) must have a type of the general form σ -> τ (since it is a function or, more precisely, an abstraction). But that means the body (x x) must have some type τ (assuming x : σ). This is just rule (3) expressed in natural language. So now we need to figure out the type of the function's body, which is an application.
Now, the rule for application (4) says that in an expression of the form (e1 e2), e1 must be a function, e1 : α -> β, and e2 : α must be an argument of the right type. Let's apply this rule to the body (x x): we get (1) x : α -> β and (2) x : α. Since a term in STLC can have only one type, we end up with the equation α -> β = α.
But there is no way to unify the two types, since α is a proper subterm of α -> β. That's why this won't typecheck.
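Spelled out as a math block (plain LaTeX restating the argument above):
\frac{\Gamma \vdash x : \alpha \to \beta \qquad \Gamma \vdash x : \alpha}
     {\Gamma \vdash (x\,x) : \beta}
\qquad\Longrightarrow\qquad
\alpha = \alpha \to \beta
No finite type α satisfies α = α -> β, because the right-hand side is strictly larger than the left.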
By the way, one of the major points of the STLC was to forbid self-application (like (x x)): self-application lets one write non-terminating computations (see, for instance, the Y combinator), which prevents using the (untyped) lambda calculus as a logic.
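For completeness, the classic non-terminating term built from self-application (standard material, included here only as an illustration):
\Omega = (\lambda x.\,x\,x)\,(\lambda x.\,x\,x)
       \;\to_\beta\; (\lambda x.\,x\,x)\,(\lambda x.\,x\,x)
       \;\to_\beta\; \cdots
A single β-reduction step reproduces Ω itself, so evaluation never terminates; any type system that rejects (x x) rules such terms out.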

How to change Racket expression to Float type following Optimization Coach suggestion?

I am writing a small numeric program, using Typed Racket. I want to improve its performance, and installed the Optimization Coach plugin in DrRacket. However, I am not able to follow its advice in the very first suggestion that it outputs.
The code is the following (you can see it in context here in Github):
(define: 𝛆 : Positive-Integer 20)
(define: base : Positive-Integer 10)
(define: N : Integer (* (+ n 𝛆) (exact-floor (/ (log base) (log 2)))))
and the Optimization Coach output is the following:
[Optimization Coach screenshot: the line defining N is highlighted in red, with a suggestion to use Float arithmetic in (/ (log base) (log 2))]
This seems simple enough, right? 2 can be changed to 2.0, and this yields an optimization (a less red color on the line), but it is base that I cannot touch without getting a TypeCheck error.
Defining or casting base as Float
(define: base : Float 10.0)
;; or
(log (cast base Float))
leads to:
❯ raco exe bellard.rkt
bellard.rkt:31:47: Type Checker: type mismatch
  expected: Real
  given: Number
  in: (/ (log base) (log 2))
How can I perform this optimization? Any help is appreciated.
This is a bit silly, but I found the answer to my question in the paper that presents Optimization Coach, which I had read too hastily.
Unbeknownst to the programmer, however, this code suffers from a special case in Racket's treatment of mixed-type arithmetic. Integer-float multiplication produces a floating-point number, unless the integer is 0, in which case the result is the integer 0. Thus the result of the above multiplication is a floating-point number most of the time, but not always, making floating-point specialization unsafe.
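The special case is easy to check at a Racket REPL (a quick sketch; the values follow directly from the quoted paragraph):
> (* 2 1.5)   ; integer × float → float, most of the time...
3.0
> (* 0 1.5)   ; ...but exact 0 stays exact 0, so the result is not always a Float
0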
I supposed this also applied to mixed-type division, and changed my code to:
(define: N : Integer (* (+ n 𝛆) (exact-floor (/ (log (exact->inexact base))
(log 2.0)))))
The optimization is confirmed by the plugin with a green line.