In Elm, how can I pipe a value to a function as an argument other than the last one?

I'm new to Elm and, to be honest, struggling a bit to get my head around certain concepts right now. I'm not sure how clear my question is, but this is what I'm trying to do.
For example:
aFunction value1 value2
is equivalent to:
value2
    |> aFunction value1
But what if I want to pass value1 to aFunction through a pipe instead of value2?
At the moment I'm using something like this:
value1
    |> (\x y -> aFunction y x) value2
However, it strikes me as a bit of a kludge, to be honest. Is there a more elegant way to do this?
What I'm trying to code in practice is part of quite a long chain of pipes which would be impractical (or at least unreadable) to do using an expression with lots of parentheses.

Use the flip function (which is just the function you are defining in-line with the lambda expression):
value1 |> (flip aFunction) value2
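Note that in more recent Elm versions (0.19 and later), flip was removed from the core library, so you may need to define it yourself. A short sketch that matches the old behavior:
flip : (a -> b -> c) -> b -> a -> c
flip f b a =
    f a b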

As per @chepner's answer, flip seems to be the best way to do this for functions with 2 arguments.
Given that there doesn't seem to be any built-in solution to the wider question of how to do this for functions having more than 2 arguments, I came up with the following.
I'm really not sure if this is likely to be particularly useful in real life, but it was (at least) an interesting exercise for me as I'm really still learning fairly basic stuff:
module RotArgs exposing (..)
rot3ArgsL1 func = \y z x -> func x y z
rot3ArgsL2 func = \z x y -> func x y z
rot4ArgsL1 func = \x y z w -> func w x y z
rot4ArgsL2 func = \y z w x -> func w x y z
rot4ArgsL3 func = \z w x y -> func w x y z
So, for example rot3ArgsL1 takes a 3-argument function and returns a version of that function in which the expected order of arguments has been rotated 1 place to the left... and so forth.
It would enable things like:
import RotArgs exposing (..)
someFunction one two three =
...whatever
...
one
    |> (rot3ArgsL1 someFunction) two three
Obviously RotArgs could easily be extended to functions of more than 4 arguments, though that seems likely to be less and less useful the more arguments are added.

Related

What is eq_rect and where is it defined in Coq?

From what I have read, eq_rect and equality seem deeply interlinked. Weirdly, I'm not able to find a definition for it in the manual.
Where does it come from, and what does it state?
If you use Locate eq_rect you will find that eq_rect is located in Coq.Init.Logic, but if you look in that file there is no eq_rect in it. So, what's going on?
When you define an inductive type, Coq in many cases automatically generates 3 induction principles for you, appending _rect, _rec, _ind to the name of the type.
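For example, defining a small enumerated type (the name tri and its constructors are made up for illustration) makes Coq generate tri_rect, tri_rec and tri_ind:
Inductive tri : Set := A | B | C.
Check tri_rect.
(* tri_rect : forall P : tri -> Type,
     P A -> P B -> P C -> forall t : tri, P t *)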
To understand what eq_rect means you need its type,
Check eq_rect.
Here we go:
eq_rect
     : forall (A : Type) (x : A) (P : A -> Type),
       P x -> forall y : A, x = y -> P y
and you need to understand the notion of Leibniz's equality:
Leibniz characterized the notion of equality as follows:
Given any x and y, x = y if and only if, given any predicate P, P(x) if and only if P(y).
In this law, "P(x) if and only if P(y)" can be weakened to "P(x) if P(y)"; the modified law is equivalent to the original, since a statement that applies to "any x and y" applies just as well to "any y and x".
Speaking less formally, the above quotation says that if x and y are equal, their "behavior" for every predicate is the same.
To see more clearly that Leibniz's equality directly corresponds to eq_rect we can rearrange the order of parameters of eq_rect into the following equivalent formulation:
eq_rect_reorder
     : forall (A : Type) (P : A -> Type) (x y : A),
       x = y -> P x -> P y
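As a sketch, this reordered principle is definable directly in terms of eq_rect (eq_rect_reorder is just a name chosen here):
Definition eq_rect_reorder (A : Type) (P : A -> Type) (x y : A)
  : x = y -> P x -> P y :=
  fun e p => @eq_rect A x P p y e.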

Unique variable names in Z3

I need to rely on the fact that two Z3 variables cannot have the same name. To be sure of that, I've used tuple_example1() from test_capi.c in z3/examples/c and changed the original code from:
// some code before that ...
x = mk_real_var(ctx, "x");
y = mk_real_var(ctx, "y"); // originally y is called "y"
// some code after that ...
to:
// some code before that ...
x = mk_real_var(ctx, "x");
y = mk_real_var(ctx, "x"); // but now both x and y are called "x"
// some code after that ...
And (as expected) the output changed from:
tuple_example1
tuple_sort: (real, real)
prove: get_x(mk_pair(x, y)) = 1 implies x = 1
valid
disprove: get_x(mk_pair(x, y)) = 1 implies y = 1
invalid
counterexample:
y -> 0.0
x -> 1.0
to:
tuple_example1
tuple_sort: (real, real)
prove: get_x(mk_pair(x, y)) = 1 implies x = 1
valid
disprove: get_x(mk_pair(x, y)) = 1 implies y = 1
valid
BUG: unexpected result.
However, when I looked closer, I found out that Z3 did not really fail or anything; it is just a naive (driver) printout to the console.
So I went ahead and wrote the exact same test with y being an int sort called "x".
To my surprise, Z3 could handle two variables with the same name when they have different sorts:
tuple_example1
tuple_sort: (real, real)
prove: get_x(mk_pair(x, y)) = 1 implies x = 1
valid
disprove: get_x(mk_pair(x, y)) = 1 implies y = 1
invalid
counterexample:
x -> 1.0
x -> 0
Is that really what's going on, or is it just a coincidence?
Any help is very much appreciated, thanks!
In general, SMT-Lib does allow repeated variable names, so long as they have different sorts. See page 27 of the standard. In particular, it says:
Concretely, a variable can be any symbol, while a function symbol
can be any identifier (i.e., a symbol or an indexed symbol). As a
consequence, contextual information is needed during parsing to know
whether an identifier is to be treated as a variable or a function
symbol. For variables, this information is provided by the three
binders which are the only mechanism to introduce variables. Function
symbols, in contrast, are predefined, as explained later. Recall that
every function symbol f is separately associated with one or more
ranks, each specifying the sorts of f’s arguments and result. To
simplify sort checking, a function symbol in a term can be annotated
with one of its result sorts σ. Such an annotated function symbol is a
qualified identifier of the form (as f σ).
Also on page 31 of the same document, it further clarifies "ambiguity" thusly:
Except for patterns in match expressions, every occurrence of an
ambiguous function symbol f in a term must occur as a qualified
identifier of the form (as f σ) where σ is the intended output sort of
that occurrence
So, in SMT-Lib lingo, you'd write it like this:
(declare-fun x () Int)
(declare-fun x () Real)
(assert (= (as x Real) 2.5))
(assert (= (as x Int) 2))
(check-sat)
(get-model)
This produces:
sat
(model
  (define-fun x () Int
    2)
  (define-fun x () Real
    (/ 5.0 2.0))
)
What you are observing in the C interface is essentially a rendering of the same. Of course, how much "checking" is enforced by the interface is totally solver specific, as SMT-Lib says nothing about C APIs or APIs for other languages. That actually explains the BUG line you see in the output there. At this point, the behavior is entirely solver specific.
But long story short, SMT-Lib does indeed allow two variables to have the same name, so long as they have different sorts.
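For completeness, here is a minimal C sketch of the same idea through the C API (assuming an already-created Z3_context; error handling omitted, and the function name is made up for this example):
#include <z3.h>

/* Declare two constants that share the symbol "x" but differ in sort. */
void same_name_different_sorts(Z3_context ctx)
{
    Z3_symbol s = Z3_mk_string_symbol(ctx, "x");
    Z3_ast x_int  = Z3_mk_const(ctx, s, Z3_mk_int_sort(ctx));
    Z3_ast x_real = Z3_mk_const(ctx, s, Z3_mk_real_sort(ctx));
    /* Both are valid, distinct constants; they merely render with the
       same name, as in the model output above. */
    (void)x_int;
    (void)x_real;
}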

Understanding 'impossible'

Type-Driven Development with Idris presents:
twoPlusTwoNotFive : 2 + 2 = 5 -> Void
twoPlusTwoNotFive Refl impossible
Is the above a function or a value? If it's the former, then why are there no variable arguments, e.g.:
add1 : Int -> Int
add1 x = x + 1
In particular, I'm confused by the lack of = in twoPlusTwoNotFive.
impossible calls out combinations of arguments which are, well, impossible. Idris absolves you of the responsibility to provide a right-hand side when a case is impossible.
In this instance, we're writing a function of type (2 + 2 = 5) -> Void. Void is a type with no values, so if we succeed in implementing such a function we should expect that all of its cases will turn out to be impossible. Now, = has only one constructor (Refl : x = x), and it can't be used here because it requires ='s arguments to be definitionally equal - they have to be the same x. So, naturally, it's impossible. There's no way anyone could successfully call this function at runtime, and we're saved from having to prove something that isn't true, which would have been quite a big ask.
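For contrast, here is a case where Refl does typecheck, because both sides reduce to the same value:
fourIsFour : 2 + 2 = 4
fourIsFour = Refl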
Here's another example: you can't index into an empty vector. Scrutinising the Vect and finding it to be [] tells us that n ~ Z; since Fin n is the type of natural numbers less than n there's no value a caller could use to fill in the second argument.
at : Vect n a -> Fin n -> a
at [] FZ impossible
at [] (FS i) impossible
at (x::xs) FZ = x
at (x::xs) (FS i) = at xs i
Much of the time you're allowed to omit impossible cases altogether.
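For example, the vector indexing above can (as far as I know) be written with the impossible clauses simply left out, and the coverage checker still accepts it:
at : Vect n a -> Fin n -> a
at (x::xs) FZ = x
at (x::xs) (FS i) = at xs i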
I slightly prefer Agda's notation for the same concept, which uses the symbol () to explicitly pinpoint which bit of the input expression is impossible.
twoPlusTwoNotFive : (2 + 2 ≡ 5) -> ⊥
twoPlusTwoNotFive () -- again, no RHS
at : forall {n}{A : Set} -> Vec A n -> Fin n -> A
at [] ()
at (x ∷ xs) zero = x
at (x ∷ xs) (suc i) = at xs i
I like it because sometimes you only learn that a case is impossible after doing some further pattern matching on the arguments; when the impossible thing is buried several layers down it's nice to have a visual aid to help you spot where it was.

":=" and "=>" in Mercury

I recently came across this code example in Mercury:
append(X,Y,Z) :-
X == [],
Z := Y.
append(X,Y,Z) :-
X => [H | T],
append(T,Y,NT),
Z <= [H | NT].
Being a Prolog programmer, I wonder: what's the difference between a normal unification = and the := or => which are used here?
In the Mercury reference, these operators are given different priorities, but the difference isn't explained.
First let's re-write the code using indentation:
append(X, Y, Z) :-
    X == [],
    Z := Y.
append(X, Y, Z) :-
    X => [H | T],
    append(T, Y, NT),
    Z <= [H | NT].
The code above isn't real Mercury code; it is pseudo code. It doesn't make sense as real Mercury code because the <= and => operators are used for typeclasses (IIRC), not unification. Additionally, I haven't seen the := operator before; I'm not sure what it does.
In this style of pseudo code, I believe the author is trying to show that := is an assignment-like unification where X is assigned the value of Y. Similarly, => shows a deconstruction of X, and <= shows a construction of Z. Also, == shows an equality test between X and the empty list. All of these operations are kinds of unification. The compiler knows which kind of unification should be used for each mode of the predicate. For this code, the mode that makes sense is append(in, in, out).
Mercury is different from Prolog in this respect: it knows which kind of unification to use, and can therefore generate more efficient code and ensure that the program is mode-correct.
One more thing, the real Mercury code for this pseudo code would be:
:- pred append(list(T)::in, list(T)::in, list(T)::out) is det.
append(X, Y, Z) :-
    X = [],
    Z = Y.
append(X, Y, Z) :-
    X = [H | T],
    append(T, Y, NT),
    Z = [H | NT].
Note that every unification is a = and a predicate and mode declaration has been added.
In concrete Mercury syntax the operator := is used for field updates.
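For example, a small sketch with a made-up record type (point, px, py are illustrative names):
:- type point ---> point(px :: int, py :: int).

:- pred move_right(point::in, point::out) is det.
move_right(P0, P) :-
    % Update the px field, leaving py unchanged.
    P = (P0 ^ px := P0 ^ px + 1).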
These operators may not be available in recent Mercury releases, but they are in fact specialized unifications, according to the description in Nancy Mazur's thesis:
=> stands for deconstruction, e.g. X => f(Y1, Y2, ..., Yn), where X is input and all the Yn are output; it is semidet.
<= is the opposite, a construction, and is det.
== is used when both sides are ground, and it is semidet.
:= is just like a regular assignment operator in any other language, and it is det.
In older papers they even use == instead of => to perform a deconstruction.

Finite difference in Haskell, or how to disable potential optimizations

I'd like to implement the following naive (first order) finite differencing function:
finite_difference :: Fractional a => a -> (a -> a) -> a -> a
finite_difference h f x = ((f $ x + h) - (f x)) / h
As you may know, there is a subtle problem: one has to make sure that (x + h) and x differ by an exactly representable number. Otherwise, the result has a huge error, amplified by the fact that (f $ x + h) - (f x) involves catastrophic cancellation (one also has to choose h carefully, but that is not my problem here).
In C or C++, the problem can be solved like this:
volatile double temp = x + h;
h = temp - x;
and the volatile modifier disables any optimization pertaining to the variable temp, so we are assured that a "clever" compiler will not optimize away those two lines.
I don't know enough Haskell yet to know how to solve this. I'm afraid that
let temp = x + h
    hh = temp - x
in ((f $ x + hh) - (f x)) / h
will get optimized away by Haskell (or the backend it uses). How do I get the equivalent of volatile here (if possible without sacrificing laziness)? I don't mind GHC specific answers.
I have two solutions and a suggestion:
First solution: You can guarantee that this won't be optimized out with two helper functions and the NOINLINE pragma:
norm1 x h = x+h
{-# NOINLINE norm1 #-}
norm2 x tmp = tmp-x
{-# NOINLINE norm2 #-}
normh x h = norm2 x (norm1 x h)
This will work, but will introduce a small cost.
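Used in your function, that might look like this (a sketch; note it divides by hh, the step actually taken, rather than by h):
finite_difference :: Fractional a => a -> (a -> a) -> a -> a
finite_difference h f x = ((f $ x + hh) - f x) / hh
  where hh = normh x h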
Second solution: Write the normalization function in C using volatile and call it through the FFI. The performance penalty will be minimal.
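A sketch of that approach (the C function name norm_h is made up here; it would live in a small C file compiled and linked alongside):
{-# LANGUAGE ForeignFunctionInterface #-}

-- The C side, for reference:
--   double norm_h(double x, double h)
--   { volatile double temp = x + h; return temp - x; }
foreign import ccall unsafe "norm_h"
    normH :: Double -> Double -> Double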
Now for the suggestion: Currently the math isn't optimized out, so it will work properly at present. You're afraid it will break in a future compiler. I think this is unlikely, but not so unlikely that I wouldn't want to guard against it also. So write some unit tests that cover the cases in question. Then if it does break in the future (for any reason), you'll know exactly why.
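For instance, a quick sanity check along those lines might look like this (a sketch; the step size and tolerance are arbitrary):
-- The derivative of sin at 1.0 should be close to cos 1.0.
main :: IO ()
main = do
    let d = finite_difference 1e-8 sin 1.0
    if abs (d - cos 1.0) < 1e-6
        then putStrLn "ok"
        else error ("finite_difference drifted: " ++ show d)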
One way is to look at the Core.
Specializing to Doubles (which will be the case most likely to trigger some optimization):
finite_difference :: Double -> (Double -> Double) -> Double -> Double
finite_difference h f x = ((f $ x + hh) - (f x)) / h
  where
    temp = x + h
    hh = temp - x
Is compiled to:
A.$wfinite_difference h f x =
  case f (case x of
            D# x' -> D# (+## x' (-## (+## x' h) x'))
         ) of
    D# x'' -> case f x of D# y -> /## (-## x'' y) h
And similarly (with even less rewriting) for the polymorphic version.
So while the variables are inlined, the math isn't optimized away.
Beyond looking at the Core, I can't think of a way to guarantee the property you want.
I don't think that
temp = unsafePerformIO $ return $ x + h
would get optimised out. Just a guess.