semOp l = \m -> case l of
    LD g -> case m of
        St xs -> St (g :: xs)
        _ -> Error
I just want to know what the \m part is doing here.
\m -> ... is the syntax for an anonymous function, aka a "lambda expression".
For example, the following two declarations would be equivalent:
f x = x + 5
f = \x -> x + 5
Both define a function with one parameter x that returns x + 5.
I wanted to add to Fyodor’s answer that this is a slightly unusual way to write this - it makes explicit that semOp l returns a lambda. The following are all valid and equivalent ways to write the beginning of this function:
semOp l m = case l of ..
semOp l = \m -> case l of ..
semOp = \l -> \m -> case l of ..
semOp = \l m -> case l of ..
You might be most used to the first version, but in some languages like Haskell these can have slightly different performance impacts - but this isn’t something for you to worry about, particularly in Elm, as far as I understand.
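For concreteness, here is the function from the question written out in the first style (just a sketch: the LD, St and Error constructors come from the question, and the outer case presumably has more branches in the full code):

semOp l m =
    case l of
        LD g ->
            case m of
                St xs ->
                    St (g :: xs)

                _ ->
                    Error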
I'm trying to generate a Seq of random options where consecutive repeats are not allowed:
> (<F R U>.roll(*).grep(* ne *))[^10]
((R F) (F U) (R F) (F R) (U R) (R F) (R F) (U F) (R U) (U R))
Why does using grep in this way result in nested pairs? And how can I achieve what I'm looking for? Using .flat above will still allow consecutive repeats. What I want is a flat sequence with no consecutive repeats, like:
( R U F U F U R F R U ... )
how can I achieve what I'm looking for?
I believe the squish method will do what you want:
say <F R U>.roll(*).squish.head(20)
This produces:
(U R F U F R F R F U F R U R F U R U R F)
Why does using grep in this way result in nested pairs?
Because grep (and map too) on something with an arity greater than 1 will work by breaking the list into chunks of that size. Thus given A B C D, the first call to the grep block gets A B, the second C D, and so on. What's really needed in this case, however, is a lagged value. One could use a state variable in the block passed to grep to achieve this (a sketch follows below); for this problem, however, squish is a much simpler solution.
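For illustration, here is a sketch of that state-variable approach (squish remains the simpler choice):

# keep an element only when it differs from the previous element seen
say <F R U>.roll(*).grep({ state $prev = ''; my $keep = $_ ne $prev; $prev = $_; $keep }).head(10);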
You could employ a simple regex using the <same> zero-width matcher, negated here as <!same>. Below in the Raku REPL:
> my $a = (<F R U>.roll(*))[^20].comb(/\w/).join; say $a;
RRFUURFFFFRRFRFUFUUU
> say $/ if $a ~~ m:g/ \w <!same> /;
(「R」 「F」 「U」 「R」 「F」 「R」 「F」 「R」 「F」 「U」 「F」 「U」)
https://docs.raku.org/language/regexes#Predefined_Regexes
I'm learning OCaml, and the docs and books I'm reading aren't very clear on some things.
In simple words, what are the differences between
-10
and
~-10
To me they seem the same. I've encountered other resources trying to explain the difference, but they explain it in terms that I'm not yet familiar with, as the only thing I know so far is variables.
In fact, - is a binary operator, so some expressions can be ambiguous: f 10 -20 is treated as (f 10) - 20. For example, let's imagine this dummy function:
let f x y = (x, y)
If I want to produce the tuple (10, -20), I would naïvely write f 10 -20, which leads me to the following error:
# f 10 -20;;
Error: This expression has type 'a -> int * 'a
but an expression was expected of type int
because the expression is evaluated as (f 10) - 20 (a subtraction applied to a function!). So you can write the expression as f 10 (-20), which is valid, or as f 10 ~-20, since ~- (and ~+, plus ~-. and ~+. for floats) is a unary operator, so the precedence is properly respected.
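With the dummy function above, both fixes then behave as expected in the toplevel:

# f 10 (-20);;
- : int * int = (10, -20)
# f 10 ~-20;;
- : int * int = (10, -20)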
It is easier to start by looking at how user-defined unary (~-) operators work.
type quaternion = { r:float; i:float; j:float; k:float }
let zero = { r = 0.; i = 0.; j = 0.; k = 0. }
let i = { zero with i = 1. }
let (~-) q = { r = -.q.r; i = -.q.i; j = -. q.j; k = -. q.k }
In this situation, the unary operator - (and +) is a shorthand for ~- (and ~+) when the parsing is unambiguous. For example, defining -i with
let mi = -i
works because this - could not be the binary operator -.
Nevertheless, the binary operator - has a higher priority than the unary -, thus
let wrong = Fun.id -i
is read as
let wrong = (Fun.id) - (i)
In this context, I can either use the full form ~-
let ok' = Fun.id ~-i
or add some parenthesis
let ok'' = Fun.id (-i)
Going back to types with literals (e.g., integers or floats): for those types, the unary + and - symbols can be part of the literal itself (e.g., -10) rather than an operator. For instance, redefining ~- and ~+ does not change the meaning of the integer literals in
let (~+) = ()
let (~-) = ()
let really = -10
let positively_real = +10
This can be "used" to create some quite obfuscated expressions:
let (~+) r = { zero with r }
let (+) x y = { r = x.r +. y.r; i = x.i +. y.i; j = x.j +. y.j; k = x.k +. y.k }
let ( *. ) s q = { r = s *. q.r; i = s *. q.i; j = s *. q.j; k = s *. q.k }
let this_is_valid x = +x + +10. *. i
OCaml has two kinds of operators - prefix and infix. The prefix operators precede expressions and infix occur in between the two expressions, e.g., in !foo we have the prefix operator ! coming before the expression foo and in 2 + 3 we have the infix operator + between expressions 2 and 3.
Operators are like normal functions except that they have a different syntax for application (aka calling): whilst functions are applied to an arbitrary number of arguments using simple juxtaposition, e.g., f x1 x2 x3 x4¹, operators can have only one (prefix) or two (infix) arguments. Prefix operators are very close to normal functions, cf. f x and !x, but they have higher precedence (bind tighter) than normal function application. In contrast, infix operators, since they are put between two expressions, enable a more natural syntax, e.g., x + y vs. (+) x y, but have lower precedence (bind less tightly) than normal function application. Moreover, they enable chaining several operators in a row, e.g., x + y + z is interpreted as (x + y) + z, which is much more readable than add (add x y) z.
Operators in OCaml are distinguished purely syntactically. This means that the kind of an operator is fully determined by its first character, not by a special compiler directive as in some other languages (i.e., there is no infix + directive in OCaml). If an operator starts with a prefix symbol, e.g., !, ?#, ~%, it is considered prefix, and if it starts with an infix symbol then it is, correspondingly, an infix operator.
The - and -. operators are treated specially and can appear both as prefix and infix. E.g., in 1 - -2 we have - occurring in both the infix and the prefix position. However, it is only possible to disambiguate between the infix and the prefix versions of - (and -.) when they occur together with other operators (infix or prefix); in a general expression, the - operator is treated as infix. E.g., max 0 -1 is interpreted as (max 0) - 1 (remember that operators have lower precedence than function application, so when the two appear without parentheses, functions are applied first and operators after that). Another example is Some -1, which is interpreted as Some - 1, not as Some (-1). To disambiguate such code, you can use either parentheses, e.g., max 0 (-1), or the prefix-only versions, e.g., Some ~-1 or max 0 ~-1.
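For instance, in the toplevel (Some -1 is rejected, while the disambiguated forms work):

# Some (-1);;
- : int option = Some (-1)
# Some ~-1;;
- : int option = Some (-1)
# max 0 ~-1;;
- : int = 0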
As a matter of personal style, I actually prefer parentheses as it is usually hard to keep these rules in mind when you read the code.
1) Purists will say that functions in OCaml can have only one argument and f x1 x2 x3 x4 is just (((f x1) x2) x3) x4, which would be a totally correct statement, but a little bit irrelevant to the current discussion.
I find it difficult to construct a grammar for a language, especially a linear grammar.
Can anyone please give me some basic tips/methodology for constructing a grammar for any language? Thanks in advance.
I have doubts about whether the following answer to the question "Construct a linear grammar for the language" is right:
L = { a^n b c^n | n ∈ ℕ }
Solution:
Right-Linear Grammar:
S--> aS | bA
A--> cA | ^
Left-Linear Grammar:
S--> Sc | Ab
A--> Aa | ^
As pointed out in the comments, these grammars are wrong since they generate strings not in the language. Here's a derivation of abcc in both grammars:
S -> aS -> abA -> abcA -> abccA -> abcc
S -> Sc -> Scc -> Abcc -> Aabcc -> abcc
Also as pointed out in the comments, there is a simple linear grammar for this language, where a linear grammar is defined as having at most one nonterminal symbol in the RHS of any production:
S -> aSc | b
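To convince yourself, here is a quick sketch in Haskell (the names generate and accepts are made up for illustration): generate n builds the unique string derived with n applications of S -> aSc, and accepts follows the productions directly.

-- generate n produces a^n b c^n, obtained by applying S -> aSc n times and then S -> b
generate :: Int -> String
generate n = replicate n 'a' ++ "b" ++ replicate n 'c'

-- accepts follows the grammar: either S -> b, or S -> a S c
accepts :: String -> Bool
accepts "b" = True
accepts ('a' : rest) = not (null rest) && last rest == 'c' && accepts (init rest)
accepts _ = False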
There are some general rules for constructing grammars for languages. These are either obvious simple rules or rules derived from closure properties and the way grammars work. For instance:
1. If L = {a} for an alphabet symbol a, then S -> a is a grammar for L.
2. If L = {e} for the empty string e, then S -> e is a grammar for L.
3. If L = R U T for languages R and T, then S -> S' | S'', along with the grammars for R and T, is a grammar for L, where S' is the start symbol of the grammar for R and S'' is the start symbol of the grammar for T.
4. If L = RT for languages R and T, then S -> S' S'' is a grammar for L, where S' and S'' are the start symbols of the grammars for R and T.
5. If L = R* for a language R, then S -> S' S | e is a grammar for L, where S' is the start symbol of the grammar for R.
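As a quick worked example (a sketch, with made-up nonterminal names), the rules compose. For L = {a}({b} U {c})*:

A -> a            (rule 1, for {a})
B -> b
C -> c
U -> B | C        (rule 3, for {b} U {c})
K -> U K | e      (rule 5, for ({b} U {c})*)
S -> A K          (rule 4, for the concatenation)

Note that K -> U K and S -> A K already have two nonterminals on the right-hand side, which foreshadows the point below.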
Rules 4 and 5, as written, do not preserve linearity. Linearity can be preserved for left-linear and right-linear grammars (since those grammars describe regular languages, and regular languages are closed under these kinds of operations); but linearity cannot be preserved in general. To prove this, an example suffices:
R -> aRb | ab
T -> cTd | cd
L = RT = { a^n b^n c^m d^m | n, m > 0 }
L' = R* = (a^n b^n)*, with n > 0 in each repetition
Suppose there were a linear grammar for L. We must have a production for the start symbol S that produces something. To produce something, we require a string of terminal and nonterminal symbols. To be linear, we must have at most one nonterminal symbol. That is, our production must be of the form
S -> xYz
where x is a string of terminals, Y is a single nonterminal, and z is a string of terminals. If x is non-empty, reflection shows the only useful choice is a; anything else fails to derive known strings in the language. Similarly, if z is non-empty, the only useful choice is d. This gives four cases:
1. x empty, z empty. This is useless, since we now have the same problem to solve for nonterminal Y as we had for S.
2. x = a, z empty. Y must now generate exactly a^n' b^n' b c^m d^m where n' = n - 1. But then the exact same argument applies to the grammar whose start symbol is Y.
3. x empty, z = d. Y must now generate exactly a^n b^n c c^m' d^m' where m' = m - 1. But then the exact same argument applies to the grammar whose start symbol is Y.
4. x = a, z = d. Y must now generate exactly a^n' b^n' bc c^m' d^m' where n' and m' are as in cases 2 and 3. But then the exact same argument applies to the grammar whose start symbol is Y.
None of the possible choices for a useful production for S is actually useful in getting us closer to a string in the language. Therefore, no strings are derived, a contradiction, meaning that the grammar for L cannot be linear.
Suppose there were a linear grammar for L'. Then that grammar has to generate all the strings of the form a^n b^n ... a^m b^m (two or more blocks), plus those in e + R. But it can't generate the former, by the argument used above: any production useful for that purpose would get us no closer to a string in the language.
I'd like to implement the following naive (first order) finite differencing function:
finite_difference :: Fractional a => a -> (a -> a) -> a -> a
finite_difference h f x = ((f $ x + h) - (f x)) / h
As you may know, there is a subtle problem: one has to make sure that (x + h) and x differ by an exactly representable number. Otherwise, the result has a huge error, amplified by the fact that (f $ x + h) - (f x) involves catastrophic cancellation (one also has to choose h carefully, but that is not my problem here).
In C or C++, the problem can be solved like this:
volatile double temp = x + h;
h = temp - x;
and the volatile modifier disables any optimization pertaining to the variable temp, so we are assured that a "clever" compiler will not optimize away those two lines.
I don't know enough Haskell yet to know how to solve this. I'm afraid that
let temp = x + h
hh = temp - x
in ((f $ x + hh) - (f x)) / h
will get optimized away by Haskell (or the backend it uses). How do I get the equivalent of volatile here (if possible without sacrificing laziness)? I don't mind GHC specific answers.
I have two solutions and a suggestion:
First solution: You can guarantee that this won't be optimized out with two helper functions and the NOINLINE pragma:
norm1 x h = x + h
{-# NOINLINE norm1 #-}

norm2 x tmp = tmp - x
{-# NOINLINE norm2 #-}

normh x h = norm2 x (norm1 x h)
This will work, but will introduce a small cost.
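Plugged into the original function, it might look like this (note that the usual form of the trick also divides by the normalised step hh):

finite_difference :: Fractional a => a -> (a -> a) -> a -> a
finite_difference h f x = (f (x + hh) - f x) / hh
  where hh = normh x h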
Second solution: Write the normalization function in C using volatile and call it through the FFI. The performance penalty will be minimal.
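A sketch of that FFI route (the C function name norm_h is made up; it would live in a small C file compiled and linked alongside):

{-# LANGUAGE ForeignFunctionInterface #-}

-- C side, where volatile does the work:
--   double norm_h(double x, double h) {
--       volatile double temp = x + h;
--       return temp - x;
--   }
foreign import ccall unsafe "norm_h"
  normH :: Double -> Double -> Double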
Now for the suggestion: Currently the math isn't optimized out, so it will work properly at present. You're afraid it will break in a future compiler. I think this is unlikely, but not so unlikely that I wouldn't want to guard against it also. So write some unit tests that cover the cases in question. Then if it does break in the future (for any reason), you'll know exactly why.
One way is to look at the Core.
Specializing to Doubles (which will be the case most likely to trigger some optimization):
finite_difference :: Double -> (Double -> Double) -> Double -> Double
finite_difference h f x = ((f $ x + hh) - (f x)) / h
  where
    temp = x + h
    hh = temp - x
Is compiled to:
A.$wfinite_difference h f x =
  case f (case x of
            D# x' -> D# (+## x' (-## (+## x' h) x'))
         ) of
    D# x'' -> case f x of D# y -> /## (-## x'' y) h
And similarly (with even less rewriting) for the polymorphic version.
So while the variables are inlined, the math isn't optimized away.
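(For reference, simplified Core like the above can be dumped with ghc -O2 -ddump-simpl.)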
Beyond looking at the Core, I can't think of a way to guarantee the property you want.
I don't think that
temp = unsafePerformIO $ return $ x + h
would get optimised out. Just a guess.
I've read about it in a book, but it wasn't explained at all. I've also never seen it in a program. Is it part of Prolog syntax? What's it for? Do you use it?
It represents implication: the right-hand side is only executed if the left-hand side is true. Thus, if you have this code,
implication(X) :-
    ( X = a ->
        write('Argument a received.'), nl
    ; X = b ->
        write('Argument b received.'), nl
    ;
        write('Received unknown argument.'), nl
    ).
Then it will write different things depending on its argument:
?- implication(a).
Argument a received.
true.
?- implication(b).
Argument b received.
true.
?- implication(c).
Received unknown argument.
true.
It's a local version of the cut; see for example the section on control predicates in the SWI-Prolog manual.
It is mostly used to implement if-then-else by (condition -> true-branch ; false-branch). Once the condition succeeds there is no backtracking from the true branch back into the condition or into the false branch, but backtracking out of the if-then-else is still possible:
?- member(X,[1,2,3]), (X=1 -> Y=a ; X=2 -> Y=b ; Y=c).
X = 1,
Y = a ;
X = 2,
Y = b ;
X = 3,
Y = c.
?- member(X,[1,2,3]), (X=1, !, Y=a ; X=2 -> Y=b ; Y=c).
X = 1,
Y = a.
Therefore it is called a local cut.
It is possible to avoid using it by writing something more wordy. If I rewrite Stephan's predicate:
implication(X) :-
    (
        X = a,
        write('Argument a received.'), nl
    ;
        X = b,
        write('Argument b received.'), nl
    ;
        X \= a,
        X \= b,
        write('Received unknown argument.'), nl
    ).
(Yeah I don't think there is any problem with using it, but my boss was paranoid about it for some reason, so we always used the above approach.)
With either version, you need to be careful that you are covering all cases you intend to cover, especially if you have many branches.
ETA: I am not sure if this is completely equivalent to Stephan's, because of backtracking if you have implication(X). But I don't have a Prolog interpreter right now to check.
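For what it's worth, the two versions should indeed differ for an unbound argument: with implication(X), the -> version commits to the first branch and only binds X = a, while the plain disjunction leaves a choice point and also binds X = b on backtracking (its last branch fails for unbound X, since X \= a fails while X can still unify with a).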