I'd like to implement the following naive (first order) finite differencing function:
finite_difference :: Fractional a => a -> (a -> a) -> a -> a
finite_difference h f x = ((f $ x + h) - (f x)) / h
As you may know, there is a subtle problem: one has to make sure that (x + h) and x differ by an exactly representable number. Otherwise the result has a huge error, magnified by the catastrophic cancellation in (f $ x + h) - (f x). (One also has to choose h carefully, but that is not my problem here.)
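A concrete illustration of the problem in GHCi (with Double): the step actually taken differs from the h you asked for:

    ghci> let x = 0.2; h = 0.1 :: Double
    ghci> (x + h) - x
    0.10000000000000003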
In C or C++, the problem can be solved like this:
volatile double temp = x + h;
h = temp - x;
and the volatile modifier disables any optimization pertaining to the variable temp, so we are assured that a "clever" compiler will not optimize away those two lines.
I don't know enough Haskell yet to know how to solve this. I'm afraid that
let temp = x + h
    hh = temp - x
in ((f $ x + hh) - (f x)) / h
will get optimized away by Haskell (or the backend it uses). How do I get the equivalent of volatile here (if possible without sacrificing laziness)? I don't mind GHC specific answers.
I have two solutions and a suggestion:
First solution: You can guarantee that this won't be optimized out with two helper functions and the NOINLINE pragma:
norm1 x h = x+h
{-# NOINLINE norm1 #-}
norm2 x tmp = tmp-x
{-# NOINLINE norm2 #-}
normh x h = norm2 x (norm1 x h)
This will work, but will introduce a small cost.
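For reference, a sketch of how the helper slots into the original function (here dividing by the adjusted step hh, since that is the step actually taken):

    finite_difference :: Fractional a => a -> (a -> a) -> a -> a
    finite_difference h f x = ((f $ x + hh) - f x) / hh
      where hh = normh x h   -- (x + h) - x, shielded from the optimizer by NOINLINE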
Second solution: Write the normalization function in C using volatile and call it through the FFI. The performance penalty will be minimal.
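A minimal sketch of that route; the file layout and the name norm_step are my own invention, not part of any existing library:

    {-# LANGUAGE ForeignFunctionInterface #-}

    -- C side, e.g. in norm.c, compiled and linked with the program:
    --
    --   double norm_step(double x, double h) {
    --       volatile double temp = x + h;
    --       return temp - x;
    --   }

    foreign import ccall unsafe "norm_step"
        normStep :: Double -> Double -> Double

Note that this pins the type to Double, so the polymorphic Fractional signature is lost at the FFI boundary.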
Now for the suggestion: currently the math isn't optimized away, so your code will work properly as it stands. You're afraid it will break under a future compiler. I think this is unlikely, but not so unlikely that I wouldn't want to guard against it. So write some unit tests that cover the cases in question. Then if it does break in the future (for any reason), you'll know exactly why.
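For instance, a guard along these lines (reusing normh from the first solution; the sample points are arbitrary) would fail loudly if a future compiler collapsed the normalization:

    -- The adjusted step must be exactly representable around x:
    -- adding it to x and then subtracting x must give it back unchanged.
    exactStep :: Double -> Double -> Bool
    exactStep x h = (x + hh) - x == hh
      where hh = normh x h

    testNorm :: Bool
    testNorm = all (uncurry exactStep) [(0.2, 0.1), (1.0, 1e-8), (1e6, 1e-3)]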
One way is to look at the Core.
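To see the Core yourself, compile with optimizations and dump the simplifier output (the module name here is whatever yours is):

    ghc -O2 -ddump-simpl FiniteDifference.hs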
Specializing to Doubles (which will be the case most likely to trigger some optimization):
finite_difference :: Double -> (Double -> Double) -> Double -> Double
finite_difference h f x = ((f $ x + hh) - (f x)) / h
  where
    temp = x + h
    hh = temp - x
Is compiled to:
A.$wfinite_difference h f x =
  case f (case x of
            D# x' -> D# (+## x' (-## (+## x' h) x'))
         ) of
    D# x'' -> case f x of D# y -> /## (-## x'' y) h
And similarly (with even less rewriting) for the polymorphic version.
So while the variables are inlined, the math isn't optimized away.
Beyond looking at the Core, I can't think of a way to guarantee the property you want.
I don't think that
temp = unsafePerformIO $ return $ x + h
would get optimised out. Just a guess.
Let's say we want to check the parity of an Int:
data Parity = Even | Odd
It would be pretty easy for Nat:
parity0 : Nat -> Parity
parity0 Z = Even
parity0 (S k) = case parity0 k of
                     Even => Odd
                     Odd => Even
The first attempt to implement that for Int:
parity1 : Int -> Parity
parity1 x = if mod x 2 == 0 then Even else Odd
This function is not total:
Main.parity1 is possibly not total due to: Prelude.Interfaces.Int implementation of Prelude.Interfaces.Integral
It makes sense, because mod is not total for Int. (Although I'm not sure how I could have known that in advance: the REPL shows that mod is total. Apparently you can use a partial function to implement a total function of an interface? Strange.)
Next, I try to use the DivBy view:
parity2 : Int -> Parity
parity2 x with (divides x 2)
  parity2 ((2 * div) + rem) | (DivBy prf) =
    if rem == 0 then Even else Odd
This function works and is total, but the implementation is error-prone and doesn't scale to cases where we have multiple possible values. I'd like to assert that rem can only be 0 or 1. So I attempt to use case on rem:
parity3 : Int -> Parity
parity3 x with (divides x 2)
  parity3 ((2 * div) + rem) | (DivBy prf) = case rem of
    0 => Even
    1 => Odd
This function also works but is not total. How can I use prf provided by DivBy to convince the compiler that it's total? How can I use this prf in general?
Would using Integer or some other type make this problem easier to solve?
And there is another very concerning thing. I tried to case-split on prf and discovered that the following function is total:
parity4 : Int -> Parity
parity4 x with (divides x 2)
  parity4 ((2 * div) + rem) | (DivBy prf) impossible
Is that a bug? I can use this function to produce a runtime crash in a program that only contains total functions.
I need to rely on the fact that two Z3 variables cannot have the same name. To be sure of that, I've used tuple_example1() from test_capi.c in z3/examples/c and changed the original code from:
// some code before that ...
x = mk_real_var(ctx, "x");
y = mk_real_var(ctx, "y"); // originally y is called "y"
// some code after that ...
to:
// some code before that ...
x = mk_real_var(ctx, "x");
y = mk_real_var(ctx, "x"); // but now both x and y are called "x"
// some code after that ...
And (as expected) the output changed from:
tuple_example1
tuple_sort: (real, real)
prove: get_x(mk_pair(x, y)) = 1 implies x = 1
valid
disprove: get_x(mk_pair(x, y)) = 1 implies y = 1
invalid
counterexample:
y -> 0.0
x -> 1.0
to:
tuple_example1
tuple_sort: (real, real)
prove: get_x(mk_pair(x, y)) = 1 implies x = 1
valid
disprove: get_x(mk_pair(x, y)) = 1 implies y = 1
valid
BUG: unexpected result.
However, when I looked closer, I found out that Z3 did not really fail or anything; the BUG line is just a naive printout to the console by the example driver.
So I went ahead and wrote the exact same test, but with y being an int sort called "x".
To my surprise, Z3 could handle two variables with the same name when they have different sorts:
tuple_example1
tuple_sort: (real, real)
prove: get_x(mk_pair(x, y)) = 1 implies x = 1
valid
disprove: get_x(mk_pair(x, y)) = 1 implies y = 1
invalid
counterexample:
x -> 1.0
x -> 0
Is that really what's going on, or is it just a coincidence?
Any help is very much appreciated, thanks!
In general, SMT-Lib does allow repeated variable names, so long as they have different sorts. See page 27 of the standard. In particular, it says:
Concretely, a variable can be any symbol, while a function symbol can be any identifier (i.e., a symbol or an indexed symbol). As a consequence, contextual information is needed during parsing to know whether an identifier is to be treated as a variable or a function symbol. For variables, this information is provided by the three binders which are the only mechanism to introduce variables. Function symbols, in contrast, are predefined, as explained later. Recall that every function symbol f is separately associated with one or more ranks, each specifying the sorts of f’s arguments and result. To simplify sort checking, a function symbol in a term can be annotated with one of its result sorts σ. Such an annotated function symbol is a qualified identifier of the form (as f σ).
Also, on page 31 of the same document, it further clarifies "ambiguity" as follows:
Except for patterns in match expressions, every occurrence of an ambiguous function symbol f in a term must occur as a qualified identifier of the form (as f σ) where σ is the intended output sort of that occurrence.
So, in SMT-Lib lingo, you'd write it like this:
(declare-fun x () Int)
(declare-fun x () Real)
(assert (= (as x Real) 2.5))
(assert (= (as x Int) 2))
(check-sat)
(get-model)
This produces:
sat
(model
  (define-fun x () Int
    2)
  (define-fun x () Real
    (/ 5.0 2.0))
)
What you are observing in the C interface is essentially a rendering of the same. Of course, how much "checking" is enforced by the interface is totally solver specific, as SMT-Lib says nothing about C APIs or APIs for other languages. That actually explains the BUG line you see in the output there: at that point, the behavior is entirely solver specific.
But long story short, SMT-Lib does indeed allow two variables to have the same name, so long as they have different sorts.
Type-Driven Development with Idris presents:
twoPlusTwoNotFive : 2 + 2 = 5 -> Void
twoPlusTwoNotFive Refl impossible
Is the above a function or a value? If it's the former, then why are there no argument variables, as in e.g.
add1 : Int -> Int
add1 x = x + 1
In particular, I'm confused by the lack of = in twoPlusTwoNotFive.
impossible calls out combinations of arguments which are, well, impossible. Idris absolves you of the responsibility to provide a right-hand side when a case is impossible.
In this instance, we're writing a function of type (2 + 2 = 5) -> Void. Void is a type with no values, so if we succeed in implementing such a function we should expect that all of its cases will turn out to be impossible. Now, = has only one constructor (Refl : x = x), and it can't be used here because it requires ='s arguments to be definitionally equal - they have to be the same x. So, naturally, it's impossible. There's no way anyone could successfully call this function at runtime, and we're saved from having to prove something that isn't true, which would have been quite a big ask.
Here's another example: you can't index into an empty vector. Scrutinising the Vect and finding it to be [] tells us that n ~ Z; since Fin n is the type of natural numbers less than n there's no value a caller could use to fill in the second argument.
at : Vect n a -> Fin n -> a
at [] FZ impossible
at [] (FS i) impossible
at (x::xs) FZ = x
at (x::xs) (FS i) = at xs i
Much of the time you're allowed to omit impossible cases altogether.
I slightly prefer Agda's notation for the same concept, which uses the symbol () to explicitly pinpoint which bit of the input expression is impossible.
twoPlusTwoNotFive : (2 + 2 ≡ 5) -> ⊥
twoPlusTwoNotFive () -- again, no RHS
at : forall {n}{A : Set} -> Vec A n -> Fin n -> A
at [] ()
at (x ∷ xs) zero = x
at (x ∷ xs) (suc i) = at xs i
I like it because sometimes you only learn that a case is impossible after doing some further pattern matching on the arguments; when the impossible thing is buried several layers down it's nice to have a visual aid to help you spot where it was.
Well, I do recognize that I'm puzzled by Haskell and that this is my first weekend with it.
I just wonder whether the following design of an OO class Point2D is supposed to be written in Haskell as follows:
import Prelude hiding ( sum )

-- ...............................................................
-- "class type" : types belonging to this family of types
-- must implement distance and sum functions
-- ...............................................................
class PointFamily p where
  -- p is a type of this family, not a point
  distance :: p -> p -> Float  -- takes two things of type p and returns a Real
  sum      :: p -> p -> p      -- takes two things of type p and returns a p thing

-- ...............................................................
-- data type: Point2D
-- a new type with x and y coordinates
-- ...............................................................
data Point2D = Point2D { x :: Float , y :: Float }
  deriving (Show)  -- it is "showable/printable"

-- ...............................................................
-- Point2D belongs to PointFamily, so let's say it and
-- how to compute distance and sum for this type
-- ...............................................................
instance PointFamily Point2D where
  -- ............................................................-
  distance p1 p2 = sqrt (dx*dx + dy*dy)
    where
      dx = (x p1) - (x p2)
      dy = (y p1) - (y p2)
  -- ............................................................-
  sum p1 p2 = Point2D { x = (x p1)+(x p2), y = (y p1)+(y p2) }

-- ...............................................................
-- global constant
-- ...............................................................
origin = Point2D 0.0 0.0

-- ...............................................................
-- main
-- ...............................................................
main = do
    putStrLn "Hello"
    print b
    print $ distance origin b
    print $ sum b b
  where
    b = Point2D 3.0 4.0
Yes, I know I should not try to "think OOP" in Haskell, but... well, 1) that's going to take a long time, and 2) in practice I guess you're going to find several OOP designs that have to be rewritten in Haskell.
First off: indeed, you should try not to "think OOP" in Haskell!
But your code isn't really OOP at all. It would be OO if you started to try virtual inheritance etc., but in this example it's more that the OO implementation happens to resemble the obvious Haskell implementation.
Only, it should be emphasised that the type class PointFamily really doesn't have any particular 1:1 relation to the data type Point2D, unlike their bundling in the OO class. In practice you would make instances of this class for any type where it conceivably works. Unsurprisingly, all of this has been done before: the most widespread equivalent of PointFamily is AffineSpace from the vector-space package. That is a lot more general, but has in principle much the same purpose.
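To make that concrete, a second instance might look like this (Point3D is invented here for illustration; it is not part of the original code):

    -- Another member of the family, to show the class is not tied to Point2D:
    data Point3D = Point3D { x3 :: Float , y3 :: Float , z3 :: Float }
      deriving (Show)

    instance PointFamily Point3D where
      distance p1 p2 = sqrt (dx*dx + dy*dy + dz*dz)
        where
          dx = x3 p1 - x3 p2
          dy = y3 p1 - y3 p2
          dz = z3 p1 - z3 p2
      sum p1 p2 = Point3D (x3 p1 + x3 p2) (y3 p1 + y3 p2) (z3 p1 + z3 p2)

Any code written against the PointFamily class then works for both point types unchanged.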
Just to illustrate leftroundabout's point about not needing to think OO, I took the liberty of removing the typeclass, to show how straightforward the code can be. Don't vote for this if you currently need the ability to write code that works unmodified against 2D and 3D points. But I suspect what you really need right now is a 2D point and this code does that nicely. This is on the "You ain't gonna need it" principle. If, later, it turns out you do need it, there are several ways of introducing it.
I also added strictness annotations (the bangs) on the x and y fields, since typical 2D applications usually want those fields to be strict.
import Prelude hiding ( sum )

data Point2D = Point2D { x :: !Float , y :: !Float }
  deriving (Read,Show,Eq)

distance :: Point2D -> Point2D -> Float  -- takes two points and returns a Float
distance p1 p2 = sqrt (dx*dx + dy*dy)
  where
    dx = (x p1) - (x p2)
    dy = (y p1) - (y p2)

sum :: Point2D -> Point2D -> Point2D  -- takes two points and returns a point
sum p1 p2 = Point2D { x = (x p1)+(x p2), y = (y p1)+(y p2) }

origin = Point2D 0.0 0.0

main = do
    putStrLn "Hello"
    print b
    print $ distance origin b
    print $ sum b b
  where
    b = Point2D 3.0 4.0
I'm curious how to optimize this code:
fun n = (sum l, f $ f0 l, g $ g0 l)
  where l = map h [1..n]
Assuming that f, f0, g, g0, and h are all costly, but the creation and storage of l is extremely expensive.
As written, l is stored until the returned tuple is fully evaluated or garbage collected. Instead, sum l, f0 l, and g0 l should all be executed whenever any one of them is executed, but f and g should be delayed.
It appears this behavior could be fixed by writing:
fun n = a `seq` b `seq` c `seq` (a, f b, g c)
  where
    l = map h [1..n]
    a = sum l
    b = inline f0 $ l
    c = inline g0 $ l
Or the very similar:
fun n = (a,b,c) `deepseq` (a, f b, g c)
  where ...
We could perhaps specify a bunch of internal types to achieve the same effects as well, which looks painful. Are there any other options?
Also, I'm obviously hoping with my inlines that the compiler fuses sum, f0, and g0 into a single loop that constructs and consumes l term by term. I could make this explicit through manual inlining, but that'd suck. Are there ways to explicitly prevent the list l from ever being created and/or compel inlining? Pragmas that produce warnings or errors if inlining or fusion fail during compilation perhaps?
As an aside, I'm curious why seq, inline, lazy, etc. are all defined to be let x = x in x in the Prelude. Is this simply to give them a definition for the compiler to override?
If you want to be sure, the only way is to do it yourself. For any given compiler version, you can try out several source formulations and check the generated Core/assembly/LLVM bitcode/whatever to see whether it does what you want. But that could break with each new compiler version.
If you write
fun n = a `seq` b `seq` c `seq` (a, f b, g c)
  where
    l = map h [1..n]
    a = sum l
    b = inline f0 $ l
    c = inline g0 $ l
or the deepseq version thereof, the compiler might be able to merge the computations of a, b and c to be performed in parallel (not in the concurrency sense) during a single traversal of l, but for the time being, I'm rather convinced that GHC doesn't, and I'd be surprised if JHC or UHC did. And for that the structure of computing b and c needs to be simple enough.
The only way to obtain the desired result portably across compilers and compiler versions is to do it yourself. For the next few years, at least.
Depending on f0 and g0, it might be as simple as doing a strict left fold with an appropriate accumulator type and combining function, like the famous average:

import Data.List (foldl')

data P = P {-# UNPACK #-} !Int {-# UNPACK #-} !Double

average :: [Double] -> Double
average = ratio . foldl' count (P 0 0)
  where
    ratio (P n s) = s / fromIntegral n
    count (P n s) x = P (n+1) (s+x)
but if the structure of f0 and/or g0 doesn't fit, say one's a left fold and the other a right fold, it may be impossible to do the computation in one traversal. In such cases, the choice is between recreating l and storing l. Storing l is easy to achieve with explicit sharing (where l = map h [1..n]), but recreating it may be difficult if the compiler does some common subexpression elimination (unfortunately, GHC does have a tendency to share lists of that form, even though it does little CSE). For GHC, the flags -fno-cse and -fno-full-laziness can help avoid unwanted sharing.
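To make the favourable case concrete for the fun above: if f0 and g0 both happen to be left folds, all three results can be carried in one strict accumulator. In this sketch, everything named below (the sin/max/product stand-ins for h, f0, g0, and the initial values) is a placeholder chosen only to have the right shape:

    import Data.List (foldl')

    -- Placeholders for the expensive functions in the question:
    h', f', g' :: Double -> Double
    h' = sin        -- the per-element transform h
    f' = (* 2)      -- the post-processing step f
    g' = negate     -- the post-processing step g

    -- One strict accumulator carrying all three running results:
    data Acc = Acc !Double !Double !Double  -- running sum, f0 state, g0 state

    fun :: Int -> (Double, Double, Double)
    fun n = (s, f' b, g' c)
      where
        -- the list is consumed element by element as the fold advances,
        -- so it is never retained in full; f' and g' stay delayed
        Acc s b c = foldl' step (Acc 0 (-1/0) 1) [h' (fromIntegral i) | i <- [1..n]]
        step (Acc s' b' c') e = Acc (s' + e) (max b' e) (c' * e)

Whether GHC actually fuses the intermediate list away here is, as said above, something only inspecting the Core can confirm.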