A simple object-oriented class 'Point' in Haskell - oop

Well, I recognize that I'm puzzled by Haskell and that this is my first weekend with it.
I just wonder whether the following design of an OO class Point2D
is supposed to be written in Haskell as follows:
import Prelude hiding ( sum )
-- ...............................................................
-- "class type" : types belonging to this family of types
-- must implement distance and sum functions
-- ...............................................................
class PointFamily p where
  -- p is a type of this family, not a point
  distance :: p -> p -> Float -- takes two things of type p and returns a Real
  sum      :: p -> p -> p     -- takes two things of type p and returns a p thing
-- ...............................................................
-- data type: Point2D
-- a new type with x and y coordinates
-- ...............................................................
data Point2D = Point2D { x :: Float , y :: Float }
  deriving (Show) -- it is "showable/printable"
-- ...............................................................
-- Point2D belongs to PointFamily, so let's say it and
-- how to compute distance and sum for this type
-- ...............................................................
instance PointFamily Point2D where
  -- .............................................................
  distance p1 p2 = sqrt (dx*dx + dy*dy)
    where
      dx = (x p1) - (x p2)
      dy = (y p1) - (y p2)
  -- .............................................................
  sum p1 p2 = Point2D { x = (x p1)+(x p2), y = (y p1)+(y p2) }
-- ...............................................................
-- global constant
-- ...............................................................
origin = Point2D 0.0 0.0
-- ...............................................................
-- main
-- ...............................................................
main = do
    putStrLn "Hello"
    print b
    print $ distance origin b
    print $ sum b b
  where
    b = Point2D 3.0 4.0
Yes, I know I should not try to "think OOP" in Haskell, but ... well, 1) that's going to take a long time, and 2) in practice I guess you're gonna find several OOP designs that need to be rewritten in Haskell

First off: indeed, you should try not to "think OOP" in Haskell!
But your code isn't really OOP at all. It would be OO if you started to try virtual inheritance etc., but in this example it's more that the OO implementation happens to resemble the obvious Haskell implementation.
Only, it should be emphasised that the type class PointFamily really doesn't have any particular 1:1 relation with the data type Point2D, the way the two are bundled together in an OO class. In practice you would make instances of this class for any type where it conceivably works. Unsurprisingly, all of this has been done before; the most widespread equivalent of PointFamily is AffineSpace from the vector-space package. That is a lot more general, but has in principle much the same purpose.
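To make that concrete, here is a minimal sketch of a second instance for a hypothetical Point3D type, reusing the question's PointFamily class unchanged (Point3D and its fields are made up for illustration):

-- assumes the PointFamily class from the question above is in scope
data Point3D = Point3D { x3 :: Float, y3 :: Float, z3 :: Float }
  deriving (Show)

instance PointFamily Point3D where
  distance p1 p2 = sqrt (dx*dx + dy*dy + dz*dz)
    where
      dx = x3 p1 - x3 p2
      dy = y3 p1 - y3 p2
      dz = z3 p1 - z3 p2
  sum p1 p2 = Point3D (x3 p1 + x3 p2) (y3 p1 + y3 p2) (z3 p1 + z3 p2)

Nothing about the class says "2D"; any type that can sensibly implement distance and sum can join the family.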

Just to illustrate leftroundabout's point about not needing to think OO, I took the liberty of removing the typeclass, to show how straightforward the code can be. Don't vote for this if you currently need the ability to write code that works unmodified against 2D and 3D points. But I suspect what you really need right now is a 2D point and this code does that nicely. This is on the "You ain't gonna need it" principle. If, later, it turns out you do need it, there are several ways of introducing it.
I also added bang patterns on the x and y fields, since typical 2D applications usually want those fields to be strict.
import Prelude hiding ( sum )

data Point2D = Point2D { x :: !Float , y :: !Float }
  deriving (Read, Show, Eq)

-- distance between two 2D points
distance :: Point2D -> Point2D -> Float
distance p1 p2 = sqrt (dx*dx + dy*dy)
  where
    dx = (x p1) - (x p2)
    dy = (y p1) - (y p2)

-- component-wise sum of two 2D points
sum :: Point2D -> Point2D -> Point2D
sum p1 p2 = Point2D { x = (x p1)+(x p2), y = (y p1)+(y p2) }

origin = Point2D 0.0 0.0

main = do
    putStrLn "Hello"
    print b
    print $ distance origin b
    print $ sum b b
  where
    b = Point2D 3.0 4.0
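Running this main prints the point b, then 5.0 for the distance from the origin to (3,4), then the doubled point, exactly as the class-based version does.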

Related

Unique variable names in Z3

I need to rely on the fact that two Z3 variables
cannot have the same name.
To be sure of that,
I've used tuple_example1() from test_capi.c in z3/examples/c and changed the original code from:
// some code before that ...
x = mk_real_var(ctx, "x");
y = mk_real_var(ctx, "y"); // originally y is called "y"
// some code after that ...
to:
// some code before that ...
x = mk_real_var(ctx, "x");
y = mk_real_var(ctx, "x"); // but now both x and y are called "x"
// some code after that ...
And (as expected) the output changed from:
tuple_example1
tuple_sort: (real, real)
prove: get_x(mk_pair(x, y)) = 1 implies x = 1
valid
disprove: get_x(mk_pair(x, y)) = 1 implies y = 1
invalid
counterexample:
y -> 0.0
x -> 1.0
to:
tuple_example1
tuple_sort: (real, real)
prove: get_x(mk_pair(x, y)) = 1 implies x = 1
valid
disprove: get_x(mk_pair(x, y)) = 1 implies y = 1
valid
BUG: unexpected result.
However, when I looked closer, I found out that Z3 did not really fail or anything; it is just a naive (driver) printout to the console.
So I went ahead and wrote the exact same test with y being an int sort called "x".
To my surprise, Z3 could handle two variables with the same name when they have different sorts:
tuple_example1
tuple_sort: (real, real)
prove: get_x(mk_pair(x, y)) = 1 implies x = 1
valid
disprove: get_x(mk_pair(x, y)) = 1 implies y = 1
invalid
counterexample:
x -> 1.0
x -> 0
Is that really what's going on, or is it just a coincidence?
Any help is very much appreciated, thanks!
In general, SMT-Lib does allow repeated variable names, so long as they have different sorts. See page 27 of the standard. In particular, it says:
Concretely, a variable can be any symbol, while a function symbol
can be any identifier (i.e., a symbol or an indexed symbol). As a
consequence, contextual information is needed during parsing to know
whether an identifier is to be treated as a variable or a function
symbol. For variables, this information is provided by the three
binders which are the only mechanism to introduce variables. Function
symbols, in contrast, are predefined, as explained later. Recall that
every function symbol f is separately associated with one or more
ranks, each specifying the sorts of f’s arguments and result. To
simplify sort checking, a function symbol in a term can be annotated
with one of its result sorts σ. Such an annotated function symbol is a
qualified identifier of the form (as f σ).
Also on page 31 of the same document, it further clarifies "ambiguity" thusly:
Except for patterns in match expressions, every occurrence of an
ambiguous function symbol f in a term must occur as a qualified
identifier of the form (as f σ) where σ is the intended output sort of
that occurrence
So, in SMT-Lib lingo, you'd write it like this:
(declare-fun x () Int)
(declare-fun x () Real)
(assert (= (as x Real) 2.5))
(assert (= (as x Int) 2))
(check-sat)
(get-model)
This produces:
sat
(model
(define-fun x () Int
2)
(define-fun x () Real
(/ 5.0 2.0))
)
What you are observing in the C interface is essentially a rendering of the same. Of course, how much "checking" is enforced by the interface is totally solver specific, as SMT-Lib says nothing about C APIs or APIs for other languages. That actually explains the BUG line you see in the output there. At this point, the behavior is entirely solver specific.
But long story short, SMT-Lib does indeed allow two variables to have the same name so long as they have different sorts.

Understanding 'impossible'

Type-Driven Development with Idris presents:
twoPlusTwoNotFive : 2 + 2 = 5 -> Void
twoPlusTwoNotFive Refl impossible
Is the above a function or a value? If it's the former, then why are there no variable arguments, e.g.
add1 : Int -> Int
add1 x = x + 1
In particular, I'm confused at the lack of = in twoPlusTwoNotFive.
impossible calls out combinations of arguments which are, well, impossible. Idris absolves you of the responsibility to provide a right-hand side when a case is impossible.
In this instance, we're writing a function of type (2 + 2 = 5) -> Void. Void is a type with no values, so if we succeed in implementing such a function we should expect that all of its cases will turn out to be impossible. Now, = has only one constructor (Refl : x = x), and it can't be used here because it requires ='s arguments to be definitionally equal - they have to be the same x. So, naturally, it's impossible. There's no way anyone could successfully call this function at runtime, and we're saved from having to prove something that isn't true, which would have been quite a big ask.
Here's another example: you can't index into an empty vector. Scrutinising the Vect and finding it to be [] tells us that n ~ Z; since Fin n is the type of natural numbers less than n there's no value a caller could use to fill in the second argument.
at : Vect n a -> Fin n -> a
at [] FZ impossible
at [] (FS i) impossible
at (x::xs) FZ = x
at (x::xs) (FS i) = at xs i
Much of the time you're allowed to omit impossible cases altogether.
I slightly prefer Agda's notation for the same concept, which uses the symbol () to explicitly pinpoint which bit of the input expression is impossible.
twoPlusTwoNotFive : (2 + 2 ≡ 5) -> ⊥
twoPlusTwoNotFive () -- again, no RHS
at : forall {n}{A : Set} -> Vec A n -> Fin n -> A
at [] ()
at (x ∷ xs) zero = x
at (x ∷ xs) (suc i) = at xs i
I like it because sometimes you only learn that a case is impossible after doing some further pattern matching on the arguments; when the impossible thing is buried several layers down it's nice to have a visual aid to help you spot where it was.
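The same "leave out the impossible cases" idea also carries over to Haskell GADTs, for comparison. Here is a small sketch (Nat, Vec and Fin are defined locally, not taken from any library); unlike Idris and Agda, Haskell has no impossible or () marker, so the unreachable cases are simply omitted:

{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

data Nat = Z | S Nat

data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

data Fin (n :: Nat) where
  FZ :: Fin ('S n)
  FS :: Fin n -> Fin ('S n)

-- No VNil case is written: it would need an inhabitant of Fin 'Z,
-- and GHC's pattern-match checker can see that none exists.
at :: Vec n a -> Fin n -> a
at (VCons x _)  FZ     = x
at (VCons _ xs) (FS i) = at xs i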

How do I define the property of a function being its own inverse in Idris?

I want to be able to say, for a function f with signature t -> t, that for all x in t, f(f(x)) = x.
When I run this:
%default total
-- The type of parity values - either Even or Odd
data Parity = Even | Odd
-- Even is the opposite of Odd and Odd is the opposite of Even
opposite: Parity -> Parity
opposite Even = Odd
opposite Odd = Even
-- The 'opposite' function is its own inverse
opposite_its_own_inverse : (p : Parity) -> opposite (opposite p) = p
opposite_its_own_inverse Even = Refl
opposite_its_own_inverse Odd = Refl
-- abstraction of being one's own inverse
IsItsOwnInverse : {t : Type} -> (f: t->t) -> Type
IsItsOwnInverse {t} f = (x: t) -> f (f x) = x
opposite_IsItsOwnInverse : IsItsOwnInverse {t=Parity} opposite
opposite_IsItsOwnInverse = opposite_its_own_inverse
I get this error message:
- + Errors (1)
`-- own_inverse_example.idr line 22 col 25:
When checking right hand side of opposite_IsItsOwnInverse with expected type
IsItsOwnInverse opposite
Type mismatch between
(p : Parity) ->
opposite (opposite p) = p (Type of opposite_its_own_inverse)
and
(x : Parity) -> opposite (opposite x) = x (Expected type)
Specifically:
Type mismatch between
opposite (opposite v0)
and
opposite (opposite v0)
Am I doing something wrong, or is that just a bug?
If I replace the last 'opposite_its_own_inverse' with '?hole', I get:
Holes
This buffer displays the unsolved holes from the currently-loaded code. Press
the [P] buttons to solve the holes interactively in the prover.
- + Main.hole [P]
`-- opposite : Parity -> Parity
-------------------------------------------------------
Main.hole : (x : Parity) -> opposite (opposite x) = x
The name for this property is an involution. Your type for this property is pretty good but I like writing it like so:
Involution : (t -> t) -> t -> Type
Involution f x = f (f x) = x
The first problem with your opposite_IsItsOwnInverse is that you haven't fully applied Involution, so you haven't yet gotten a type. You also need to apply a Parity so that Involution gives a Type, like so:
opposite_IsItsOwnInverse : Involution opposite p
That p is an implicit argument. Implicit arguments are implicitly created by lowercase identifiers in type signatures. This is like writing:
opposite_IsItsOwnInverse : {p : Parity} -> Involution opposite p
But this leads to another problem with the signature: opposite is also lowercase, so it's getting treated as an implicit argument! (This is why you get the confusing error message; Idris has created another variable called opposite.) You have two possible solutions here: qualify the identifier, or use an identifier which starts with an uppercase letter.
I'll assume the module you're writing uses the default name of Main. The final type signature looks like:
opposite_IsItsOwnInverse : Involution Main.opposite p
And the implementation will just use the implicit argument and supply it to the function you've already written:
opposite_IsItsOwnInverse {p} = opposite_its_own_inverse p

Finite difference in Haskell, or how to disable potential optimizations

I'd like to implement the following naive (first order) finite differencing function:
finite_difference :: Fractional a => a -> (a -> a) -> a -> a
finite_difference h f x = ((f $ x + h) - (f x)) / h
As you may know, there is a subtle problem: one has to make sure that (x + h) and x differ by an exactly representable number. Otherwise, the result has a huge error, leveraged by the fact that (f $ x + h) - (f x) involves catastrophic cancellation (and one has to carefully choose h, but that is not my problem here).
In C or C++, the problem can be solved like this:
volatile double temp = x + h;
h = temp - x;
and the volatile modifier disables any optimization pertaining to the variable temp, so we are assured that a "clever" compiler will not optimize away those two lines.
I don't know enough Haskell yet to know how to solve this. I'm afraid that
let temp = x + h
    hh   = temp - x
in  ((f $ x + hh) - (f x)) / h
will get optimized away by Haskell (or the backend it uses). How do I get the equivalent of volatile here (if possible without sacrificing laziness)? I don't mind GHC specific answers.
I have two solutions and a suggestion:
First solution: You can guarantee that this won't be optimized out with two helper functions and the NOINLINE pragma:
norm1 x h = x+h
{-# NOINLINE norm1 #-}
norm2 x tmp = tmp-x
{-# NOINLINE norm2 #-}
normh x h = norm2 x (norm1 x h)
This will work, but will introduce a small cost.
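For completeness, here is a sketch of how normh would slot into the original function. Note the division by the recomputed step hh, mirroring the C idiom above where h is overwritten with temp - x before use:

finite_difference :: Fractional a => a -> (a -> a) -> a -> a
finite_difference h f x = ((f $ x + hh) - (f x)) / hh
  where
    hh = normh x h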
Second solution: Write the normalization function in C using volatile and call it through the FFI. The performance penalty will be minimal.
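A minimal sketch of what that binding could look like; the helper name normh_c and its C source are hypothetical, not an existing library:

{-# LANGUAGE ForeignFunctionInterface #-}
-- The C side would contain something like:
--   double normh_c(double x, double h) {
--       volatile double temp = x + h;
--       return temp - x;
--   }
foreign import ccall unsafe "normh_c"
  normh_c :: Double -> Double -> Double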
Now for the suggestion: Currently the math isn't optimized out, so it will work properly at present. You're afraid it will break in a future compiler. I think this is unlikely, but not so unlikely that I wouldn't want to guard against it also. So write some unit tests that cover the cases in question. Then if it does break in the future (for any reason), you'll know exactly why.
One way is to look at the Core.
Specializing to Doubles (which will be the case most likely to trigger some optimization):
finite_difference :: Double -> (Double -> Double) -> Double -> Double
finite_difference h f x = ((f $ x + hh) - (f x)) / h
  where
    temp = x + h
    hh   = temp - x
Is compiled to:
A.$wfinite_difference h f x =
  case f (case x of
            D# x' -> D# (+## x' (-## (+## x' h) x'))
         ) of
    D# x'' -> case f x of D# y -> /## (-## x'' y) h
And similarly (with even less rewriting) for the polymorphic version.
So while the variables are inlined, the math isn't optimized away.
Beyond looking at the Core, I can't think of a way to guarantee the property you want.
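(For reference, GHC will print this simplified Core itself if you compile with ghc -O2 -ddump-simpl.)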
I don't think that
temp = unsafePerformIO $ return $ x + h
would get optimised out. Just a guess.

preserving units for calculations in programming

I was wondering if there are any sweet languages that offer some sort of abstraction for "feet" vs "inches" or "cm" etc. I was considering doing something like the following in Java:
u(56).feet() + u(26).inches()
and be able to get something like
17.7292 meters as the result.
One possible approach is, when making a new value, immediately convert it to a "base" unit, like meters or something, so you can add them easily.
However, I would much rather have the ability to preserve units, so that something like
u(799.95555).feet() - u(76).feet()
returns
723.95555 feet
and not
243.826452 meters - 23.1648 meters = 220.661652 meters
//220.661652 meters to feet returns 723.955551 feet
Since this problem seems like it would be really common, is there any framework or even a programming language that exists that handles this elegantly?
I suppose I can just keep the units as they are in my objects, adding matching units together and only converting when the result of a + - * / (add/subtract/multiply/divide) operation is requested, which is great for adding and subtracting:
//A
{
    this.inches = 36.2;
    this.meters = 1;
}
//total length is 1.91948 m
if I add this to an object B with values
//B
{
    this.inches = 0.8;
    this.meters = 2;
}
//total length is 2.02032 m
and I get a new object that is
{
    this.inches = 37;
    this.meters = 3;
}
//total length is 3.9398 meters
which is totally awesome, I can convert it whenever I want no problem. But operations such as multiplication would fail ...
//A * B = 3.87796383 m^2
{
    this.inches = 28.96;
    this.meters = 2;
}
// ...but multiplying piece-wise and then adding
// gives you 2.01868383 m^2, assuming you make 2m*1m give you 2 m^2.
So all I really wanted to show with that example was that
( A1 + A2 ) * ( Z1 + Z2 ) is not ( A1 * Z1 ) + ( A2 * Z2 )
And I'm pretty sure this means one has to convert to a common unit if they want to multiply or divide.
The example was mostly to discourage the reflexive answer, that you add or subtract them piece-wise before converting at the last moment, since * and / will fail.
tl;dr: Are there any clever ways to preserve units in programming? Are there clever ways to name methods/routines such that it's easy for me to understand what I'm adding and subtracting, etc?
I know for a fact there is such a language, although I haven't used it myself.
It's called Frink.
It's called Frink.
It not only allows you to mix different units for the same dimension but also lets you operate on several different physical measurements. The sample calculations on its site are a fun read. I particularly like the Superman bit.
F# has language support for units of measure.
EDIT: See also How do F# Units of Measure work
Many functional languages allow creating types for this sort of unit preservation. In Haskell:
-- you need GeneralizedNewtypeDeriving to derive Num
newtype Feet = Feet {unFeet :: Float} deriving (Eq, Show, Num)
newtype Meters = Meters {unMeters :: Float} deriving (Eq, Show, Num)
Now each unit is its own type, and you can only perform operations on values of the same type:
*Main> let a1 = 1 :: Feet
*Main> let a2 = 2 :: Feet
*Main> let a3 = 3 :: Meters
*Main> a1+a2
Feet 3.0
*Main> a1+a3
<interactive>:1:3:
    Couldn't match expected type `Feet' against inferred type `Meters'
    In the second argument of `(+)', namely `a3'
    In the expression: a1 + a3
    In the definition of `it': it = a1 + a3
*Main>
Now you can create a conversion type class to convert to and from any measurement types
class LengthMeasure unit where
  untype :: unit -> Float
  toFeet :: unit -> Feet
  toFeet = Feet . (* 3.2808) . untype . toMeters
  toMeters :: unit -> Meters
  toMeters = Meters . (* 0.3048) . untype . toFeet

instance LengthMeasure Feet where
  untype = unFeet
  toFeet = id

instance LengthMeasure Meters where
  untype = unMeters
  toMeters = id
Now we can freely convert between types:
*Main> a1+toFeet a3
Feet {unFeet = 10.842401}
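Multiplication that changes the unit (the m^2 issue raised in the question) can be modelled in the same style; here is a small sketch building on the LengthMeasure class above, with a hypothetical SquareMeters newtype:

newtype SquareMeters = SquareMeters { unSquareMeters :: Float }
  deriving (Eq, Show)

-- convert both operands to a common unit (meters) before multiplying
area :: (LengthMeasure a, LengthMeasure b) => a -> b -> SquareMeters
area u v = SquareMeters (unMeters (toMeters u) * unMeters (toMeters v))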
Of course, packages to do this sort of thing are available in Haskell.
Since you're using Java already, maybe Scala or Clojure would offer similar capabilities?
JSR-275 might be relevant http://code.google.com/p/unitsofmeasure/
See Which jsr-275 units implementation should be used?
I have done a lot of work with Units and there isn't anything comprehensive. You can find a lot of partial utilities (I think there are some distributed with UNIXes). NIST was developing a units markup language but it's been at least a decade cooking.
To do this properly needs an ontology in which the units are defined and the rules for conversion. You also have to deal with prefixes.
If you stick with physical science (SI units) there are 7 (possibly 8) base unit types and 22 named derived quantities. But there is also an infinite number of ways they can be combined. For example, the rate of change of acceleration is called "jerk" by some. In principle you could have an indefinite number of derivatives.
Are currencies units? etc...