Why is 0 sometimes Numeric and sometimes not Numeric?
my @numbers = -1, 0, 1, 'hello';
.say for @numbers.grep( Numeric );
say "====";
for @numbers -> $n {
say $n if $n.Numeric;
}
#-1
#0
#1
#====
#-1
#1
The problem is in your interpretation of $n.Numeric. You appear to think that it returns a Bool to indicate whether something is Numeric (although your own example shows otherwise).
In any case, $n.Numeric COERCES to a Numeric value. But since 0 is already a Numeric value (as your grep example shows), it is actually a no-op.
Then why doesn't it show up? For the simple reason that 0.Bool is False, while 1.Bool and (-1).Bool are True. So in the if statement:
say $n if $n.Numeric;
0 will not be shown, because if conceptually coerces to Bool under the hood. And this will not fire, because 0.Numeric.Bool is False.
You probably wanted to do
say $n if $n ~~ Numeric;
THAT would test if $n has a Numeric value in it.
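For comparison, the same truthiness-versus-type distinction exists in Python; a minimal sketch (the variable names are illustrative):

```python
numbers = [-1, 0, 1, 'hello']

# Truthiness test: 0 is numeric but falsy, so it gets dropped,
# just like `if $n.Numeric` drops 0 in Raku.
truthy = [n for n in numbers if isinstance(n, int) and n]

# Type test: keeps 0, analogous to `$n ~~ Numeric`.
numeric = [n for n in numbers if isinstance(n, int)]
```

Here `truthy` is `[-1, 1]` while `numeric` is `[-1, 0, 1]`.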
Related
How can I check if a variable is equal to something, and set to a new variable in the child scope?
For example:
bar = 'foobar'
my_slice = bar[:3]
if my_slice == 'foo':
    print(my_slice)
It seems like the new walrus operator here would be useful, but it's not immediately straightforward how you'd go about using it here
Walrus operators work here very well, we just need to understand exactly how they work.
if (my_slice := bar[:3]) == 'foo':
    print(my_slice)
Walrus operators set a variable to the output of some expression. They work almost identically to the equals sign, except that they can be used inline.
So this expression:
(my_slice := bar[:3]) == 'foo'
Can be boiled down to (variable = expression) == value
So because the output of my_slice := bar[:3] is equal to bar[:3], the above is equivalent to
bar[:3] == 'foo'
Note: The parentheses here are required; without them, my_slice would be bound to the output of the comparison operation, i.e. True or False.
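A runnable version of the snippet (Python 3.8+, where assignment expressions were introduced), showing both the parenthesized and unparenthesized behavior; matched and my_slice2 are illustrative names:

```python
bar = 'foobar'

# Parenthesized: my_slice is bound to bar[:3] ('foo') before the comparison.
if (my_slice := bar[:3]) == 'foo':
    matched = my_slice

# Without parentheses the comparison binds first, equivalent to
# my_slice2 := (bar[:3] == 'foo'), so the variable ends up True, not 'foo'.
unparenthesized = (my_slice2 := bar[:3] == 'foo')
```

After this runs, `matched` is `'foo'` while `my_slice2` is `True`.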
I'm working on this week's PerlWChallenge.
You are given an array of integers @A. Write a script to create an
array that represents the smaller element to the left of each
corresponding index. If none found then use 0.
Here's my approach:
my @A = (7, 8, 3, 12, 10);
my $L = @A.elems - 1;
say gather for 1 .. $L -> $i { take @A[ 0..$i-1 ].grep( * < @A[$i] ).min };
Which kinda works and outputs:
(7 Inf 3 3)
The Infinity obviously comes from the empty grep. Checking:
> raku -e "().min.say"
Inf
But why is the minimum of an empty Seq Infinity? If anything it should be -Infinity. Or zero?
It's probably a good idea to test for the empty sequence anyway.
I ended up using
take .min with @A[ 0..$i-1 ].grep( * < @A[$i] ) or 0
or
take ( @A[ 0..$i-1 ].grep( * < @A[$i] ) or 0 ).min
Generally, Inf works out quite well in the face of further operations. For example, consider a case where we have a list of lists, and we want to find the minimum across all of them. We can do this:
my @a = [3,1,3], [], [-5,10];
say @a>>.min.min
And it will just work, since (1, Inf, -5).min comes out as -5. Were min to instead have -Inf as its value, then it'd get this wrong. It will also behave reasonably in comparisons, e.g. if @a.min > @b.min { }; by contrast, an undefined value would warn.
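Python exposes the same design choice: min() raises on an empty sequence unless you pass default, and supplying infinity as that default composes correctly across nested minimums, mirroring the list-of-lists example above. A sketch:

```python
import math

a = [[3, 1, 3], [], [-5, 10]]

# math.inf is the identity of min: it never beats a real element,
# so the empty inner list cannot distort the overall minimum.
inner = [min(xs, default=math.inf) for xs in a]
overall = min(inner)
```

`inner` comes out as `[1, inf, -5]` and `overall` as `-5`, exactly as in the Raku version.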
TL;DR say min displays Inf.
min is, or at least behaves like, a reduction.
Per the doc for reduction of a List:
When the list contains no elements, an exception is thrown, unless &with is an operator with a known identity value (e.g., the identity value of infix:<+> is 0).
Per the doc for min:
a comparison Callable can be specified with the named argument :by
by is min's spelling of with.
To easily see the "identity value" of an operator/function, call it without any arguments:
say min # Inf
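The identity-value idea can be sketched as a seeded fold in Python; the variable names are illustrative:

```python
from functools import reduce

# Seeding a fold with the operator's identity makes the empty case
# well defined: 0 for +, and +inf for min.
empty_sum = reduce(lambda x, y: x + y, [], 0)
empty_min = reduce(min, [], float('inf'))
some_min = reduce(min, [3, 1, 2], float('inf'))
```

The identity is simply what the fold returns when there is nothing to fold.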
Imo the underlying issue here is one of many unsolved wide challenges of documenting Raku. Perhaps comments here in this SO about doc would best focus on the narrow topic of solving the problem just for min (and maybe max and minmax).
I think the inspiration comes from the
infimum
(the greatest lower bound). Suppose we take the set of integers (or real
numbers) and extend it with a greatest element Inf and a least element
-Inf. The infimum of the empty set (as a subset of this extended set) is
then the greatest element, Inf: every element is vacuously a lower bound
of the empty set, and Inf is the greatest such bound. For any nonempty
finite set of real numbers, minimum and infimum coincide.
Similarly, min in Raku behaves like an infimum for some Ranges:
1 ^.. 10
andthen .min; #1
but 1 is not an element of 1 ^.. 10, so 1 is not the minimum; it is the infimum
of the range.
It is useful for some algorithms; see the answer by Jonathan
Worthington, or:
q{3 1 3
-2
--
-5 10
}.lines
andthen .map: *.comb( /'-'?\d+/ )».Int # (3, 1, 3), (-2,), (), (-5, 10)
andthen .map: *.min # 1,-2,Inf,-5
andthen .produce: &[min]
andthen .fmt: '%2d',',' # 1,-2,-2,-5
this (from the docs) makes sense to me
method min(Range:D:)
Returns the start point of the range.
say (1..5).min; # OUTPUT: «1»
say (1^..^5).min; # OUTPUT: «1»
and I think the infimum idea is quite a good mnemonic for the exclusive-endpoint case, which could equally be 5.1^.., 5.0001^.., etc.
Is it possible to avoid creating temporary scalars when returning multiple arrays from a function:
use v6;
sub func() {
    my @a = 1..3;
    my @b = 5..10;
    return @a, @b;
}
my ($x, $y) = func();
my @x := $x;
my @y := $y;
say "x: ", @x; # OUTPUT: x: [1 2 3]
say "y: ", @y; # OUTPUT: y: [5 6 7 8 9 10]
I would like to avoid creating the temporary variables $x and $y.
Note: It is not possible to replace the function call with
my (@x, @y) = func()
since assignment of a list to an Array is eager and therefore both of the returned arrays end up in @x.
Not either of:
my ($x, $y) = func();
my (@x, @y) = func();
But instead either of:
my (@x, @y) := func();
my ($x, $y) := func();
Use @ to signal to P6 that, when it needs to distinguish whether something is singular -- "a single array" -- or plural -- "items contained in a single array" -- it should be treated as plural.
Use $ to signal the other way around -- it should be treated as singular.
You can always later explicitly reverse this by writing $(@x) -- to signal P6 should use the singular perspective for something you originally declared as plural -- or @$x to signal reversing the other way around.
For an analogy, think of a cake cut into several pieces. Is it a single thing or a bunch of pieces? Note also that # caches indexing of the pieces whereas $ just remembers that it's a cake. For large lists of things this can make a big difference.
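For contrast, a sketch of the same function in Python, where tuple unpacking consumes exactly one level of structure and so needs no binding operator (func is an illustrative name):

```python
def func():
    a = list(range(1, 4))    # [1, 2, 3]
    b = list(range(5, 11))   # [5, 6, 7, 8, 9, 10]
    return a, b              # a 2-tuple of lists

# Unpacking consumes exactly one level, so the two lists stay separate:
# there is no eager flattening into the first target.
x, y = func()
```

This is the behavior the Raku `:=` binding recovers.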
I've been trying to exercise my Perl 6 chops by looking at some golfing problems. One of them involved extracting the bits of an integer. I haven't been able to come up with a succinct way to write such an expression.
My "best" tries so far follow, using 2000 as the number. I don't care whether the most or least significant bit comes first.
A numeric expression:
map { $_ % 2 }, (2000, * div 2 ... * == 0)
A recursive anonymous subroutine:
{ $_ ?? ($_ % 2, |&?BLOCK($_ div 2)) !! () }(2000)
Converting to a string:
2000.fmt('%b') ~~ m:g/./
Of these, the first feels cleanest to me, but it would be really nice to be able to generate the bits in a single step, rather than mapping over an intermediate list.
Is there a cleaner, shorter, and/or more idiomatic way to get the bits, using a single expression? (That is, without writing a named function.)
The easiest way would be:
2000.base(2).comb
The .base method returns a string representation, and .comb splits it into characters - similar to your third method.
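For comparison, a Python sketch of the same approach: format(n, 'b') plays the role of .base(2), and list() splits the string like .comb:

```python
n = 2000

# format(n, 'b') gives the binary string representation;
# list() splits it into individual characters.
bits = list(format(n, 'b'))
```

`''.join(bits)` gives `'11111010000'`, most significant bit first.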
An imperative solution, least to most significant bit:
my $i = 2000; say (loop (; $i; $i +>= 1) { $i +& 1 })
The same thing rewritten using hyperoperators on a sequence:
say (2000, * +> 1 ...^ !*) >>+&>> 1
An alternative that is more useful when you need to change the base to anything above 36, is to use polymod with an infinite list of that base.
Most of the time you will have to reverse the order though.
say 2000.polymod(2 xx *);
# (0 0 0 0 1 0 1 1 1 1 1)
say 2000.polymod(2 xx *).reverse;
say [R,] 2000.polymod(2 xx*);
# (1 1 1 1 1 0 1 0 0 0 0)
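The repeated division that polymod performs can be sketched in Python with divmod; the helper name digits is illustrative:

```python
def digits(n, base=2):
    """Digits of n in the given base, least significant first (like polymod)."""
    out = []
    while n:
        n, r = divmod(n, base)   # quotient carries on, remainder is the digit
        out.append(r)
    return out

bits_lsb_first = digits(2000)
```

Like polymod, this yields the least significant digit first, so you usually reverse the result for display.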
So I just found this bug in my code and I am wondering what rules I'm not understanding.
I have a float variable logDiff that currently contains a very small number. I want to see if it's bigger than a constant expression (80% of a 12th). I read years ago in Code Complete to just leave calculated constants in their simplest form for readability, since the compiler (Xcode 4.6.3) will fold them anyway. So I have:
if ( logDiff > 1/12 * .8 ) {
I'm assuming the .8 and the fraction all evaluates to the correct number. Looks legit:
(lldb) expr (float) 1/12 * .8
(double) $1 = 0.0666666686534882
(lldb) expr logDiff
(float) $2 = 0.000328541
But it always wrongly evaluates to true. Even when I mess with enclosing parens and stuff.
(lldb) expr logDiff > 1/12 * .8
(bool) $4 = true
(lldb) expr logDiff > (1/12 * .8)
(bool) $5 = true
(lldb) expr logDiff > (float)(1/12 * .8)
(bool) $6 = true
I found I have to explicitly spell at least one of them as floats to get the correct result,
(lldb) expr logDiff > (1.f/12.f * .8f)
(bool) $7 = false
(lldb) expr logDiff > (1/12.f * .8)
(bool) $8 = false
(lldb) expr logDiff > (1./12 * .8f)
(bool) $11 = false
(lldb) expr logDiff > (1./12 * .8)
(bool) $12 = false
but I recently read a popular style guide that explicitly eschews these fancier numeric literals, apparently relying on the same assumption I made: that the compiler would be smarter than me and Do What I Mean.
Should I always spell my numeric constants like 1.f if they might need to be a float? Sounds superstitious. Help me understand why and when it's necessary?
The expression 1/12 is an integer division, so the result is truncated toward zero: here it evaluates to 0.
When you do (float) 1/12 you cast the one as a float, and the whole expression becomes a floating point expression.
In C int/int gives an int. If you don't explicitly tell the compiler to convert at least one to a float, it will do the division and round down to the nearest int (in this case 0).
I note that the linked style guide actually says "Avoid making numbers a specific type unless necessary." In this case it is necessary, since you need the compiler to perform a type conversion.
An expression such as 1 / 4 is treated as integer division and hence has no decimal precision. In this specific case, the result will be 0. You can think of this as int / int implies int.
Should I always spell my numeric constants like 1.f if they might need to be a float? Sounds superstitious. Help me understand why and when it's necessary?
It's not superstitious, you are telling the compiler that these are type literals (floats as an example) and the compiler will treat all operations on them as such.
Moreover, you could cast an expression. Consider the following:
float result = ( float ) 1 / 4;
... I am casting 1 to be a float and hence the result of float / int will be float. See datatype operation precedence (or promotion).
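Python makes the two kinds of division explicit, which is a convenient way to see the C pitfall: // mirrors C's integer division for positive operands (C truncates toward zero while Python floors, so they differ for negatives), and / is always true division. A sketch:

```python
# C's 1/12 (int / int) corresponds to floor division here:
int_div = 1 // 12          # 0, just like C

# Promoting either operand, as (float)1/12 does in C, gives the
# intended fraction:
true_div = 1 / 12 * 0.8    # about 0.0667
```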
This is simple: by default, an integer literal is interpreted as an int.
There are math expressions where that does not matter too much, but in the case of division it can drive you crazy: (int) 1 / (int) 12 is not (float) 0.08333 but (int) 0.
1/12.0, by contrast, evaluates to (double) 0.08333, because one operand is a floating-point literal.
Side note: when you move from int to float, there is one more trap waiting for you, namely whenever you compare values for equality.
float f = 12 / 12.0f;
if (f == 1) ... // this may not work out. Never expect a computed float to equal a specific value exactly; results can vary slightly.
Better is:
if (fabs(f - 1) < 0.0001) ... // fabs (from <math.h>) makes your comparison fuzzy enough for the small variations float values may carry.
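In Python, the standard library provides this tolerance comparison directly via math.isclose; a sketch of the same pattern:

```python
import math

f = 12 / 12.0

# Tolerance-based comparison is the robust pattern for computed floats,
# even in cases like this one where exact equality happens to hold.
close = math.isclose(f, 1.0, rel_tol=1e-9)
```

The classic demonstration is `0.1 + 0.2`, which is not exactly `0.3` but is close within tolerance.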