Ramda: condition statement issue - conditional-statements

I tried to set a condition statement using R.cond. Firstly, i got the array size according to the input (i.e. [1,2,3]), then check if the array size is greater / equal to the input size (i.e. 3). But i got an error message. I would like to know why the error occur and how to fix it, thanks.
R.cond([
[R.compose(R.gte, R.length), () => {console.log(1)}],
[R.T, () => {console.log(2)}]
])([1,2,3])(3)
Error message: R.cond(...)(...) is not a function

I think there are two separate issues here. First of all, because the vast majority of calls to cond-functions are unary, it does not curry the result.
So you cannot call it as (...)([1,2,3])(3). You will need to do (...)([1,2,3], 3).
But that won't fix the other issue.
compose (and its twin pipe) do take multiple arguments for their first call, but after that only a single argument is passed between them. So the only value being passed to gte is the result of length.
You can fix this in several ways. Perhaps the simplest is:
const fn = R.cond([
[(list, len) => list.length >= len, always(1)],
[R.T, always(2)]
]);
fn([1, 2], 3); //=> 2
fn([1, 2, 3], 3); //=> 1
(Note that I changed your console.log to a function that returns a value.)
If you want to make this points-free, you could switch to use Ramda's useWith like this:
const fn = R.cond([
[R.useWith(R.gte, [R.length, R.identity]), always(1)],
[R.T, always(2)]
]);
But as often happens, I think the introduction of arrow functions makes tools like useWith less helpful. I find the earlier version more readable.
You can see these in action on the Ramda REPL.

Related

Ramda Converge causing error with arity of function

I thought I had a reasonable handle on how converge works, but I've been staring at this problem and its criptic (to me), error message for a bit and nothing seems to be jumping out at me.
Minimal Example
const first = pipe(
inc,
num => compose(multiply(num), multiply(num))
)(9)(1);
console.log("First: ", first);
const second = pipe(
inc,
converge(compose, [multiply, multiply])
)(9)(1);
console.log("Second: ", second);
Output:
First: 100
Error: First argument to _arity must be a non-negative integer no greater than ten
Your understanding of converge is probably pretty close. This is certainly an understandable thing to try.
The issue here is that when converge runs, it creates a new function that will call each of the transformation functions and then pass those results into the main function. The arity of this return function is the maximum arity of those transformation functions.
Thus converge (compose, [multiply, multiply]) is roughly equivalent to curry ((a, b) => compose (multiply (a, b), multiply (a, b)), since the arity of multiply is 2. But that means that compose won't be called until we have already received both multiplicands.
You can alter this by changing the multiplication function you provide. This works:
pipe (
inc,
converge (compose, [mult, mult])
) (9) (1)
with this:
const mult = (a) => (b) => a * b
or this:
const mult = unary (multiply)
or this:
const mult = uncurryN (1, multiply)
I'm afraid that as useful as it often is, Ramda's magical currying can lead to occasional problems like this. If we ever get version 1.0 out the door, I will try to convince the rest of the Ramda team switch to simple currying for the next version. Or maybe that's for a different library altogether.
Ramda also desperately needs to work on its error messaging.
Some day.

How to use hyperoperators with Scalars that aren't really scalar?

I want to make a hash of sets. Well, SetHashes, since they need to be mutable.
In fact, I would like to initialize my Hash with multiple identical copies of the same SetHash.
I have an array containing the keys for the new hash: #keys
And I have my SetHash already initialized in a scalar variable: $set
I'm looking for a clean way to initialize the hash.
This works:
my %hash = ({ $_ => $set.clone } for #keys);
(The parens are needed for precedence; without them, the assignment to %hash is part of the body of the for loop. I could change it to a non-postfix for loop or make any of several other minor changes to get the same result in a slightly different way, but that's not what I'm interested in here.)
Instead, I was kind of hoping I could use one of Raku's nifty hyper-operators, maybe like this:
my %hash = #keys »=>» $set;
That expression works a treat when $set is a simple string or number, but a SetHash?
Array >>=>>> SetHash can never work reliably: order of keys in SetHash is indeterminate
Good to know, but I don't want it to hyper over the RHS, in any order. That's why I used the right-pointing version of the hyperop: so it would instead replicate the RHS as needed to match it up to the LHS. In this sort of expression, is there any way to say "Yo, Raku, treat this as a scalar. No, really."?
I tried an explicit Scalar wrapper (which would make the values harder to get at, but it was an experiment):
my %map = #keys »=>» $($set,)
And that got me this message:
Lists on either side of non-dwimmy hyperop of infix:«=>» are not of the same length while recursing
left: 1 elements, right: 4 elements
So it has apparently recursed into the list on the left and found a single key and is trying to map it to a set on the right which has 4 elements. Which is what I want - the key mapped to the set. But instead it's mapping it to the elements of the set, and the hyperoperator is pointing the wrong way for that combination of sizes.
So why is it recursing on the right at all? I thought a Scalar container would prevent that. The documentation says it prevents flattening; how is this recursion not flattening? What's the distinction being drawn?
The error message says the version of the hyperoperator I'm using is "non-dwimmy", which may explain why it's not in fact doing what I mean, but is there maybe an even-less-dwimmy version that lets me be even more explicit? I still haven't gotten my brain aligned well enough with the way Raku works for it to be able to tell WIM reliably.
I'm looking for a clean way to initialize the hash.
One idiomatic option:
my %hash = #keys X=> $set;
See X metaoperator.
The documentation says ... a Scalar container ... prevents flattening; how is this recursion not flattening? What's the distinction being drawn?
A cat is an animal, but an animal is not necessarily a cat. Flattening may act recursively, but some operations that act recursively don't flatten. Recursive flattening stops if it sees a Scalar. But hyperoperation isn't flattening. I get where you're coming from, but this is not the real problem, or at least not a solution.
I had thought that hyperoperation had two tests controlling recursing:
Is it hyperoperating a nodal operation (eg .elems)? If so, just apply it like a parallel shallow map (so don't recurse). (The current doc quite strongly implies that nodal can only be usefully applied to a method, and only a List one (or augmentation thereof) rather than any routine that might get hyperoperated. That is much more restrictive than I was expecting, and I'm sceptical of its truth.)
Otherwise, is a value Iterable? If so, then recurse into that value. In general the value of a Scalar automatically behaves as the value it contains, and that applies here. So Scalars won't help.
A SetHash doesn't do the Iterable role. So I think this refusal to hyperoperate with it is something else.
I just searched the source and that yields two matches in the current Rakudo source, both in the Hyper module, with this one being the specific one we're dealing with:
multi method infix(List:D \left, Associative:D \right) {
die "{left.^name} $.name {right.^name} can never work reliably..."
}
For some reason hyperoperation explicitly rejects use of Associatives on either the right or left when coupled with the other side being a List value.
Having pursued the "blame" (tracking who made what changes) I arrived at the commit "Die on Associative <<op>> Iterable" which says:
This can never work due to the random order of keys in the Associative.
This used to die before, but with a very LTA error about a Pair.new()
not finding a suitable candidate.
Perhaps this behaviour could be refined so that the determining factor is, first, whether an operand does the Iterable role, and then if it does, and is Associative, it dies, but if it isn't, it's accepted as a single item?
A search for "can never work reliably" in GH/rakudo/rakudo issues yields zero matches.
Maybe file an issue? (Update I filed "RFC: Allow use of hyperoperators with an Associative that does not do Iterable role instead of dying with "can never work reliably".)
For now we need to find some other technique to stop a non-Iterable Associative being rejected. Here I use a Capture literal:
my %hash = #keys »=>» \($set);
This yields: {a => \(SetHash.new("b","a","c")), b => \(SetHash.new("b","a","c")), ....
Adding a custom op unwraps en passant:
sub infix:« my=> » ($lhs, $rhs) { $lhs => $rhs[0] }
my %hash = #keys »my=>» \($set);
This yields the desired outcome: {a => SetHash(a b c), b => SetHash(a b c), ....
my %hash = ({ $_ => $set.clone } for #keys);
(The parens seem to be needed so it can tell that the curlies are a block instead of a Hash literal...)
No. That particular code in curlies is a Block regardless of whether it's in parens or not.
More generally, Raku code of the form {...} in term position is almost always a Block.
For an explanation of when a {...} sequence is a Hash, and how to force it to be one, see my answer to the Raku SO Is that a Hash or a Block?.
Without the parens you've written this:
my %hash = { block of code } for #keys
which attempts to iterate #keys, running the code my %hash = { block of code } for each iteration. The code fails because you can't assign a block of code to a hash.
Putting parens around the ({ block of code } for #keys) part completely alters the meaning of the code.
Now it runs the block of code for each iteration. And it concatenates the result of each run into a list of results, each of which is a Pair generated by the code $_ => $set.clone. Then, when the for iteration has completed, that resulting list of pairs is assigned, once, to my %hash.

How to find out what arguments DM functions should take?

Through trial and error, I have found that the GetPixel function takes two arguments, one for X and one for Y, even if used on a 1D image. On a 1D image, the second index must be set to zero.
image list := [3]: {1,2,3}
list.GetPixel(0,0) // Gets 1
GetPixel(list, 0, 0) // Equivalent
How am I supposed to know this? I can't see anything clearly specifying this in the documentation.
This is best done by using the script function with an incorrect parameter list, running the script, and observing the error output:

OCaml - Rejecting negative numbers

I want to create a program that sums two ints. However, they must be both positive.
I wanted to write something like this:
let n = read_int ()
while n<=0 do
let n = read_int ()
done
So it would read the number again, until it checks that it's positive.
What is the correct way to do this?
To add to the other answers, a recursive function that would do the job would look like
let rec read_next_non_negative_int () =
let n = read_int () in
if n < 0 then
read_next_non_negative_int ()
else
n
Let's figure out how it works. It reads an integer n. If n is negative, that's no good, so we try the whole thing again. However otherwise, we have found a non-negative n so we just return it.
This is a very basic example of a recursive function. Recursive functions always have :
A call to the function itself inside their own definition. Here, it's in the then clause of our test.
An recursion termination, which is basically a return statement in many languages, but in OCaml you don't explicitly writes return. Here, it's the n in the else statement.
Without a self-call, the function wouldn't be recursive, and without recursion termination, it would loop forever. So when trying to write a recursive function, always think "What's the scenario in which there is no need to call the function again", and write that as your recursion termination(s). Then think "And when is it that I need to call my function again, and with which parameter", and write the self-call(s).
Your code isn't syntactically valid, but even if it were valid it's based on the idea of changing the value of a variable, which isn't possible in OCaml.
In imperative programming you can change the value of n until it looks like what you want. However, OCaml is a functional language and its variables are bound to one value permanently.
The n that appears in while n < = ... is not the same n that appears in let n = read_int (). The keyword let introduces a new local variable.
It might help to imagine writing a recursive function that returns the next non-negative value that it reads in using read_int. If it doesn't get a good value, it can call itself recursively.

Clearing numerical values in Mathematica

I am working on fairly large Mathematica projects and the problem arises that I have to intermittently check numerical results but want to easily revert to having all my constructs in analytical form.
The code is fairly fluid I don't want to use scoping constructs everywhere as they add work overhead. Is there an easy way for identifying and clearing all assignments that are numerical?
EDIT: I really do know that scoping is the way to do this correctly ;-). However, for my workflow I am really just looking for a dirty trick to nix all numerical assignments after the fact instead of having the foresight to put down a Block.
If your assignments are on the top level, you can use something like this:
a = 1;
b = c;
d = 3;
e = d + b;
Cases[DownValues[In],
HoldPattern[lhs_ = rhs_?NumericQ] |
HoldPattern[(lhs_ = rhs_?NumericQ;)] :> Unset[lhs],
3]
This will work if you have a sufficient history length $HistoryLength (defaults to infinity). Note however that, in the above example, e was assigned 3+c, and 3 here was not undone. So, the problem is really ambiguous in formulation, because some numbers could make it into definitions. One way to avoid this is to use SetDelayed for assignments, rather than Set.
Another alternative would be to analyze the names in say Global' context (if that is the context where your symbols live), and then say OwnValues and DownValues of the symbols, in a fashion similar to the above, and remove definitions with purely numerical r.h.s.
But IMO neither of these approaches are robust. I'd still use scoping constructs and try to isolate numerics. One possibility is to wrap you final code in Block, and assign numerical values inside this Block. This seems a much cleaner approach. The work overhead is minimal - you just have to remember which symbols you want to assign the values to. Block will automatically ensure that outside it, the symbols will have no definitions.
EDIT
Yet another possibility is to use local rules. For example, one could define rule[a] = a->1; rule[d]=d->3 instead of the assignments above. You could then apply these rules, extracting them as say
DownValues[rule][[All, 2]], whenever you want to test with some numerical arguments.
Building on Andrew Moylan's solution, one can construct a Block like function that would takes rules:
SetAttributes[BlockRules, HoldRest]
BlockRules[rules_, expr_] :=
Block ## Append[Apply[Set, Hold#rules, {2}], Unevaluated[expr]]
You can then save your numeric rules in a variable, and use BlockRules[ savedrules, code ], or even define a function that would apply a fixed set of rules, kind of like so:
In[76]:= NumericCheck =
Function[body, BlockRules[{a -> 3, b -> 2`}, body], HoldAll];
In[78]:= a + b // NumericCheck
Out[78]= 5.
EDIT In response to Timo's comment, it might be possible to use NotebookEvaluate (new in 8) to achieve the requested effect.
SetAttributes[BlockRules, HoldRest]
BlockRules[rules_, expr_] :=
Block ## Append[Apply[Set, Hold#rules, {2}], Unevaluated[expr]]
nb = CreateDocument[{ExpressionCell[
Defer[Plot[Sin[a x], {x, 0, 2 Pi}]], "Input"],
ExpressionCell[Defer[Integrate[Sin[a x^2], {x, 0, 2 Pi}]],
"Input"]}];
BlockRules[{a -> 4}, NotebookEvaluate[nb, InsertResults -> "True"];]
As the result of this evaluation you get a notebook with your commands evaluated when a was locally set to 4. In order to take it further, you would have to take the notebook
with your code, open a new notebook, evaluate Notebooks[] to identify the notebook of interest and then do :
BlockRules[variablerules,
NotebookEvaluate[NotebookPut[NotebookGet[nbobj]],
InsertResults -> "True"]]
I hope you can make this idea work.