Can you clone a Perl 6 Proc? - raku

I was playing with this in 2018.01:
my $proc = Proc.new: :out;
my $f = $proc.clone;
$f.spawn: 'ls';
put $f.out.slurp;
It says it can't do it. It's curious that the error message is about a routine I didn't use and a different class:
Cannot resolve caller stdout(Proc::Async: :bin); none of these signatures match:
(Proc::Async:D $: :$bin!, *%_)
(Proc::Async:D $: :$enc, :$translate-nl, *%_)
in block <unit> at proc-out.p6 line 3

Everything inherits a default clone method from Mu, which does a shallow clone, but that doesn't mean that everything makes sense to clone. This especially goes for objects that might hold references to OS-level things, such as Proc or IO::Handle. As the person who designed Proc::Async, I can say for certain that making it do anything useful on clone was not a design consideration. I didn't design Proc, but I suspect the same applies.
As for the error, keep in mind that the Perl 6 standard library is implemented in Perl 6 (a lot like in Java and .Net, but less like Perl 5 where many things that are provided by default go directly to something written in C). In this particular case, Proc is implemented in terms of Proc::Async. Rakudo tries to trim stack traces somewhat to eliminate calls inside of the setting, which is usually a win for the language user, but in cases like this can be a little less helpful. Running Rakudo with the --ll-exception flag provides the full details, and thus makes clearer what is going on.
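If the goal is simply a fresh process each run, skip clone entirely and construct a new Proc per spawn. A minimal sketch (the run-ls helper is mine; the calls are the ones from the question):
sub run-ls() {
    # Each call builds its own OS-level process state, so nothing is shared.
    my $proc = Proc.new: :out;
    $proc.spawn: 'ls';
    $proc.out.slurp: :close;
}
put run-ls;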

Related

Alter how arguments are processed before they're passed to sub MAIN

Given the documentation and the comments on an earlier question, by request I've made a minimal reproducible example that demonstrates a difference between these two statements:
my %*SUB-MAIN-OPTS = :named-anywhere;
PROCESS::<%SUB-MAIN-OPTS><named-anywhere> = True;
Given a script file with only this:
#!/usr/bin/env raku
use MyApp::Tools::CLI;
and a module file in MyApp/Tools called CLI.pm6:
#PROCESS::<%SUB-MAIN-OPTS><named-anywhere> = True;
my %*SUB-MAIN-OPTS = :named-anywhere;
proto MAIN(|) is export {*}
multi MAIN( 'add', :h( :$hostnames ) ) {
    for @$hostnames -> $host {
        say $host;
    }
}
multi MAIN( 'remove', *@hostnames ) {
    for @hostnames -> $host {
        say $host;
    }
}
The following invocation from the command line will not result in a recognized subroutine, but will show the usage:
mre.raku add -h=localhost -h=test1
Switching my %*SUB-MAIN-OPTS = :named-anywhere; for PROCESS::<%SUB-MAIN-OPTS><named-anywhere> = True; will print two lines with the two hostnames provided, as expected.
If, however, this is done in a single file as below, both work identically:
#!/usr/bin/env raku
#PROCESS::<%SUB-MAIN-OPTS><named-anywhere> = True;
my %*SUB-MAIN-OPTS = :named-anywhere;
proto MAIN(|) is export {*}
multi MAIN( 'add', :h( :$hostnames ) ) {
    for @$hostnames -> $host {
        say $host;
    }
}
multi MAIN( 'remove', *@hostnames ) {
    for @hostnames -> $host {
        say $host;
    }
}
I find this hard to understand.
When reproducing this, be alert to how each command must be called.
mre.raku remove localhost test1
mre.raku add -h=localhost -h=test1
So a named array-reference is not recognized when my %*SUB-MAIN-OPTS = :named-anywhere; is used in a separate file, while PROCESS::<%SUB-MAIN-OPTS><named-anywhere> = True; always works. And for a slurpy array, both work identically in both cases.
The problem is that it isn't the same variable in both the script and in the module.
Sure they have the same name, but that doesn't mean much.
my \A = anon class Foo {}
my \B = anon class Foo {}
A ~~ B; # False
B ~~ A; # False
A === B; # False
Those two classes have the same name, but are separate entities.
If you look at the code for other built-in dynamic variables, you see something like:
Rakudo::Internals.REGISTER-DYNAMIC: '$*EXECUTABLE-NAME', {
    PROCESS::<$EXECUTABLE-NAME> := $*EXECUTABLE.basename;
}
This makes sure that the variable is installed into the right place so that it works for every compilation unit.
If you look for %*SUB-MAIN-OPTS, the only thing you find is this line:
my %sub-main-opts := %*SUB-MAIN-OPTS // {};
That looks for the variable in the main compilation unit. If it isn't found it creates and uses an empty Hash.
So when you try to do it in a scope other than the main compilation unit, it isn't in a place where it can be found by that line.
To test if adding that fixes the issue, you can add this to the top of the main compilation unit. (The script that loads the module.)
BEGIN Rakudo::Internals.REGISTER-DYNAMIC: '%*SUB-MAIN-OPTS', {
    PROCESS::<%SUB-MAIN-OPTS> := {}
}
Then in the module, write this:
%*SUB-MAIN-OPTS = :named-anywhere;
Or better yet this:
%*SUB-MAIN-OPTS<named-anywhere> = True;
After trying this, it seems to work just fine.
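For clarity, here is the whole arrangement in one sketch (same file names as in the question):
# mre.raku -- the main compilation unit
BEGIN Rakudo::Internals.REGISTER-DYNAMIC: '%*SUB-MAIN-OPTS', {
    PROCESS::<%SUB-MAIN-OPTS> := {}
}
use MyApp::Tools::CLI;

# MyApp/Tools/CLI.pm6 -- the module can now assign to the dynamic variable
%*SUB-MAIN-OPTS<named-anywhere> = True;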
The thing is, that something like that used to be there.
It was removed on the thought that it slows down every Raku program.
Though I think any slowdown it caused would still be there anyway, since the line that remains has to check whether a dynamic variable of that name exists.
(There are more reasons given, and I frankly disagree with all of them.)
May a cuppa bring enlightenment to future SO readers pondering the meaning of things.[1]
Related answers by Liz
I think Liz's answer to an SO asking a similar question may be a good read for a basic explanation of why a my (which is like a lesser our) in the mainline of a module doesn't work, or at least confirmation that core devs know about it.
Her later answer to another SO explains how one can use my by putting it inside a RUN-MAIN.
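The shape of that trick, as a hedged sketch (see Liz's answer for the authoritative version; the delegation via CORE::<&RUN-MAIN> is my assumption):
# In the script: shadow RUN-MAIN so the `my` declaration is in scope
# when MAIN is dispatched, then delegate to the core RUN-MAIN.
sub RUN-MAIN(|c) {
    my %*SUB-MAIN-OPTS = :named-anywhere;
    CORE::<&RUN-MAIN>(|c)
}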
Why does a slurpy array work by default but not named anywhere?
One rich resource on why things are the way they are is the section Declaring a MAIN subroutine of S06 (Synopsis on Subroutines)[2].
A key excerpt:
As usual, switches are assumed to be first, and everything after the first non-switch, or any switches after a --, are treated as positionals or go into the slurpy array (even if they look like switches).
So it looks like this is where the default behavior, in which nameds can't go just anywhere, comes from; it seems that @Larry[3] was claiming that the "usual" shell convention was as described, and implicitly arguing that this convention should dictate the default behavior.
Since Raku was officially released, the RFC "Allow subcommands in MAIN" has put us on the path to today's :named-anywhere option. The RFC presented a very powerful 1-2 punch -- an unimpeachable two-line hackers' prose/data argument that quickly led to rough consensus, plus a working-code PR with this commit message:
Allow --named-switches anywhere in command line.
Raku was GNU-like in that it has '--double-dashes' and that it stops interpreting named parameters when it encounters '--', but unlike GNU-like parsing, it also stopped interpreting named parameters when encountering any positional argument. This patch makes it a bit more GNU-like by allowing named arguments after a positional, to prepare for allowing subcommands.
> Alter how arguments are processed before they're passed to sub MAIN
In the above linked section of S06, @Larry also wrote:
Ordinarily a top-level Raku "script" just evaluates its anonymous mainline code and exits. During the mainline code, the program's arguments are available in raw form from the @*ARGS array.
The point here being that you can preprocess @*ARGS before they're passed to MAIN.
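A hypothetical illustration (the option names are invented): the mainline can rewrite @*ARGS before the automatic MAIN dispatch happens at the end of the mainline.
# Rewrite a long option into the short alias MAIN knows about.
@*ARGS = @*ARGS.map(*.subst(/^ '--host=' /, '-h='));
sub MAIN(:h($host) = 'localhost') { say "host = $host" }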
Continuing:
At the end of the mainline code, however, a MAIN subroutine will be called with whatever command-line arguments remain in @*ARGS.
Note that, as explained by Liz, Raku now has a RUN-MAIN routine that's called prior to calling MAIN.
Then comes the standard argument processing (alterable by using standard options, of which there's currently only the :named-anywhere one, or userland modules such as SuperMAIN which add in various other features).
And finally @Larry notes that:
Other [command line parsing] policies may easily be introduced by calling MAIN explicitly. For instance, you can parse your arguments with a grammar and pass the resulting Match object as a Capture to MAIN.
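A tiny sketch of that explicit-call idea (all names illustrative; in a real script the usual automatic MAIN dispatch would still fire at the end of the mainline unless you prevent it, e.g. by exiting first):
sub MAIN(:$host = 'localhost', *@files) { say "$host: @files[]" }
# Flatten a hand-built Capture (standing in for a grammar's output)
# into an explicit MAIN call:
MAIN(|\(:host<test1>, 'a.txt', 'b.txt'));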
A doc fix?
Yesterday you wrote a comment suggesting a doc fix.
I now see that we (collectively) know about the coding issue. So why is the doc as it is? I think the combination of your SO and the prior ones provides enough anecdata to support at least considering filing a doc issue. Then again, Liz hints in one of the SOs that a fix might be coming, at least for ours. And SO is itself arguably doc. So maybe it's better to wait? I'll punt and let you decide. At least you now have several SOs to quote if you decide to file a doc issue.
Footnotes
[1] I want to be clear that if anyone perceives any fault associated with posting this SO then they're right, and the fault is entirely mine. I mentioned to @acw that I'd already done a search so they could quite reasonably have concluded there was no point in them doing one as well. So, mea culpa, bad coffee inspired puns included. (Bad puns, not bad coffee.)
[2] Imo these old historical speculative design docs are worth reading and rereading as you get to know Raku, despite them being obsolete in parts.
[3] @Larry emerged in Raku culture as a fun and convenient shorthand for Larry Wall et al, the Raku language team led by Larry.

inspect regex made with a variable

sub make-regex {
    my $what's-in-the-box = rand > .5 ?? 'x' !! 'y';
    /$what's-in-the-box/
}
my $lol-who-knows = make-regex;
$lol-who-knows.gist.say;
How do you see the innards of the regex (i.e. x or y)? Forcing a match is not a solution.
How do you see the innards of the regex (i.e. x or y)?
You:
Statically compile the regex. Doing so will involve use of a raku compiler, i.e. Rakudo.
Dynamically evaluate the regex so that the $what's-in-the-box variable in your regex gets interpolated and turns into x or y. Doing so will involve running the regex as code. That in turn means both using Rakudo and running the regex with a Match object invocant (or sub-class instance or, conceivably, a mock object equivalent).
View the resulting regex. Doing so will involve using compiler (Rakudo) toolchain specific introspection or debug functionality.
Forcing a match is not a solution.
You can run a regex with no input if your concern is just to avoid the need for a successful match:
say Match.new.&$lol-who-knows; # #<failed match>
But it must be run, otherwise the $what's-in-the-box variable won't turn into an x or y. You might think you could cheat on this by writing something that mimics this part of raku regex construction/use but there's good reason to think that's not going to work out[1].
And then you must view its innards, using Rakudo toolchain features, after you've started running it and before it finishes running.
A regex is a method
In raku, a Regex is code, a sub-class of Method.
Until you run it, by using it in a match, that code is just /$what's-in-the-box/ (aka regex { $what's-in-the-box }). It's the equivalent of something like:
method {
    # self should be a Match or a sub-class of it
    # if self is not an instance, create a new one
    # do matching -- compiler evaluates $what's-in-the-box during this
    # return updated self / new instance
}
(To see a bit more detail, see Moritz's answer to the SO Can I change the slang inside a method?.)
A routine is a closure
You create your regex inside the closure named make-regex. The compiler spots that you've used $what's-in-the-box and therefore hangs on to the variable even after the make-regex closure returns. Later on, if/when your regex is run, the $what's-in-the-box variable will be replaced by its value at the moment the regex is run.
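A small demonstration of that timing, using make-regex from the question; it does run the regex, which the question hoped to avoid, but it shows exactly when the interpolation happens:
my $r = make-regex;
# Only now, at match time, is $what's-in-the-box interpolated:
say 'x' ~~ $r ?? 'the box held x' !! 'the box held y';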
Footnotes
[1] There are plenty of huge complications you'll encounter if you try to do an end-run around using the compiler. Even something simple like interpolation is non-trivial. To quote Jonathan Worthington, in 2020:
I thought oh I'm only building [a tool that's] not the real compiler. I can ... have a simpler model of the symbol resolution. In the end it turned out that no I couldn't really. That started to create us some problems. When we aligned the way that we resolved symbols with the same algorithms and lookup structures that was being used in the compiler then suddenly it all became a lot simpler.

Is it OK to leave the [ and ] out for messages like at:ifAbsent: if you don't need a full block?

In Smalltalk (and specifically Pharo/Squeak) I want to know if it is OK to leave out the "[" and "]" for the argument to messages like at:ifAbsent: if you don't need a block, like this:
^ bookTitles at: bookID ifAbsent: ''.
and
^ books at: bookID ifAbsent: nil.
The code works because (in Pharo/Squeak) Object>>value just returns self. But I want to know how accepted this use is, or if you should always type the [ and ] even when you don't care if the argument is evaluated eagerly or more than once.
The signature:
at: key ifAbsent: aBlock
declares an intention of using a block as a 2nd parameter...
But Smalltalk is not a strongly typed language, so what kind of objects can you pass there? Any kind that understands the message #value. So be careful about the particular meaning of #value in each case, but take advantage of polymorphism!
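A short Pharo/Squeak sketch of that polymorphism, relying on Object>>value answering self as noted in the question:
| titles |
titles := Dictionary new.
"A real block: #value evaluates the body lazily, only on a miss."
Transcript show: (titles at: #missing ifAbsent: [ 'computed lazily' ]); cr.
"A plain string: Object>>value answers self (Pharo/Squeak)."
Transcript show: (titles at: #missing ifAbsent: 'plain default'); cr.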
Not all Smalltalk dialects implement #value on Object out of the box, so your code might not run on other Smalltalk dialects, IF you hand in an object that does not understand #value.
It is okay to pass objects of whatever kind as long as you know that what #value does is what you expect.
Your code may look strange to people who come from other Smalltalk dialects or are new to Smalltalk, because they learned that what you pass in here is a Block -- but so does sending messages like #join: to a Collection of Strings...
In the end, I'd say don't worry if portability is not a major issue to you.
This is what Pharo’s Code Critics say about similar situations:
Non-blocks in special messages:
Checks for methods that don't use blocks in the special messages. People new to Smalltalk might write code such as: "aBoolean ifTrue: (self doSomething)" instead of the correct version: "aBoolean ifTrue: [self doSomething]". Even if these pieces of code could be correct, they cannot be optimized by the compiler.
This rule can be found in Optimization, so you could probably ignore it, but I think it is nicer anyway to use a block.
Update:
at:ifAbsent: is not triggered by this rule. And it is not optimized by the compiler. So optimization is no reason to use blocks in this case.
I would say that it is not a good idea to leave them out. The argument will be evaluated eagerly if you leave out the brackets, and will then be sent #value. So if "self doSomething" has side effects, that would be bad. It could also be bad if #value does something you don't expect, e.g. the perhaps contrived
bookTitles at: bookID ifAbsent: 'Missing title' -> 'ISBN-000000'
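To spell out the eager-evaluation point (expensiveDefault is a hypothetical selector):
"Without brackets the argument expression runs on every call, even when bookID is present:"
books at: bookID ifAbsent: self expensiveDefault.
"With brackets, the block body runs only when bookID is absent:"
books at: bookID ifAbsent: [ self expensiveDefault ].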
If your code works and you are the only person to view the source, then it's OK. If others are to view the source, then I would say an empty block [] would have been more readable. But generally speaking, if you really care about bugs, it's a good idea not to venture outside standard practices, because there is no way to guarantee that you won't have any problems.

Use of recursion in Scala when run in the JVM

From searching elsewhere on this site and the web, tail call optimization is not supported by the JVM. Does that therefore mean that tail recursive Scala code such as the following, which may run on very large input lists, should not be written if it is to run on the JVM?
// Get the nth element in a list
def nth[T](n: Int, list: List[T]): T = list match {
  case Nil => throw new IllegalArgumentException
  case _ if n == 0 => throw new IllegalArgumentException
  case _ :: tail if n == 1 => list.head
  case _ :: tail => nth(n - 1, tail)
}
Martin Odersky's Scala by Example contains the following paragraph, which seems to suggest that there are circumstances or other environments where recursion is appropriate:
In principle, tail calls can always re-use the stack frame of the calling function. However, some run-time environments (such as the Java VM) lack the primitives to make stack frame re-use for tail calls efficient. A production quality Scala implementation is therefore only required to re-use the stack frame of a directly tail-recursive function whose last action is a call to itself. Other tail calls might be optimized also, but one should not rely on this across implementations.
Can anyone explain what the middle two sentences of this paragraph mean?
Thank you!
Since direct tail recursion is equivalent to a while loop, your example will run efficiently on the JVM, because the Scala compiler can compile it to a loop under the hood, simply using a jump. General TCO, however, is not supported on the JVM, although the tailcall() method is available, which supports tail calls using compiler-generated trampolines.
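That trampoline support lives in the standard library as scala.util.control.TailCalls; here is a minimal sketch using mutually recursive functions, which scalac cannot turn into a loop on its own:
import scala.util.control.TailCalls._

// Each step returns a TailRec value instead of deepening the stack;
// .result drives the trampoline to completion.
def isEven(n: Int): TailRec[Boolean] =
  if (n == 0) done(true) else tailcall(isOdd(n - 1))

def isOdd(n: Int): TailRec[Boolean] =
  if (n == 0) done(false) else tailcall(isEven(n - 1))

println(isEven(100000).result) // true, in constant stack space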
To ensure that the compiler can correctly optimize a tail-recursive function to a loop, you can use the scala.annotation.tailrec annotation, which will cause a compiler error if the compiler cannot make the desired optimization:
import scala.annotation.tailrec
@tailrec def nth[T](n: Int, list: List[T]): Option[T] = list match {
  case Nil => None
  case _ if n == 0 => None
  case _ :: tail if n == 1 => list.headOption
  case _ :: tail => nth(n - 1, tail)
}
(screw IllegalArgumentException!)
In principle, tail calls can always re-use the stack frame of the calling function. However, some runtime environments (such as the Java VM) lack the primitives to make stack frame re-use for tail calls efficient. A production quality Scala implementation is therefore only required to re-use the stack frame of a directly tail-recursive function whose last action is a call to itself. Other tail calls might be optimized also, but one should not rely on this across implementations.
Can anyone explain what the middle two sentences of this paragraph mean?
Tail recursion is a special case of a tail call. Direct tail recursion is a special case of tail recursion. Only direct tail recursion is guaranteed to be optimized. Others may be optimized, too, but that's basically just a compiler optimization. As a language feature, Scala only guarantees direct tail recursion elimination.
So, what's the difference?
Well, a tail call is simply the last call in a subroutine:
def a = {
  b
  c
}
In this case, the call to c is a tail call, the call to b is not.
Tail recursion is when a tail call calls a subroutine that was already called before:
def a = {
  b
}
def b = {
  a
}
This is tail recursion: a calls b (a tail call), which in turn calls a again. (In contrast to the direct tail recursion described below, this is sometimes called indirect tail recursion.)
However, neither of these two examples will get optimized by Scala. Or, more precisely: a Scala implementation is allowed to optimize them, but it is not required to do so. This is in contrast to, e.g., Scheme, where the language specification guarantees that all of the above cases will take O(1) stack space.
The Scala Language Specification only guarantees that direct tail recursion is optimized, i.e. when a subroutine directly calls itself with no other calls in between:
def a = {
  b
  a
}
In this case, the call to a is a tail call (because it is the last call in the subroutine), it is tail recursion (because it calls itself again) and most importantly it is direct tail recursion, because a directly calls itself without going through another call first.
Note that there are many subtle things that may lead to a method not being directly tail-recursive. For example, if a is overloaded, then the recursion may actually go through different overloads, and thus would no longer be direct.
In practice, this means two things:
You cannot perform an Extract Method Refactoring on a tail-recursive method, at least not one that includes the tail call, because this would turn a directly tail-recursive method (which will get optimized) into an indirectly tail-recursive method (which will not get optimized).
You can only use direct tail recursion. A tail-recursive descent parser, or a state machine, which can be very elegantly expressed using indirect tail recursion, are out.
The main reason for this is that when your underlying execution engine lacks powerful control flow manipulation features such as GOTO, continuations, first-class mutable stack or proper tail calls, then you need to either implement your own stack on top of it, use trampolines, make a global CPS transform or something similarly nasty, in order to provide generalized proper tail calls. All of these have either severe impact on performance or interoperability with other code on the same platform.
Or, as Rich Hickey, the creator of Clojure, said when he was facing the same problem: "Performance, Java interop, tail calls. Pick two." Both Clojure and Scala chose to compromise on tail calls and provide only tail recursion and not full tail calls.
To cut a long story short: yes, the specific example you posted will be optimized, since it is direct tail recursion. You can test this by putting an @tailrec annotation on the method. The annotation does not change whether or not the method gets optimized; it does, however, guarantee that you will get a compile error if the method cannot be optimized.
Due to the above-mentioned subtleties, it is generally a good idea to put an @tailrec annotation on methods that you need to be optimized, both in order to get a compile error, and as a hint to other developers maintaining your code.
The Scala compiler will attempt to optimize tail calls by "flattening" them into a loop that won't cause a continually expanding stack.
Of course, your code has to be optimizable for it to do so. If, however, you use the annotation @tailrec (scala.annotation.tailrec) before your method, the compiler will REQUIRE that the method be optimizable, or else it will fail to compile.
Martin's remark is saying that only directly self-recursive calls are candidates (other criteria being met) for the TCO optimization. Indirect, mutually recursive method pairs (or larger sets of recursive methods) cannot be so optimized.
Note that there are JVMs that support tail call optimization (IIRC, IBM's J9 does); it's just not required by the JVM specification, and Oracle's implementation doesn't do it.

How is the `*var-name*` naming-convention used in clojure?

As a non-lisper coming to clojure how should I best understand the naming convention where vars get a name like *var-name*?
This appears to be a lisp convention indicating a global variable. But in clojure such vars appear in namespaces as far as I can tell.
I would really appreciate a brief explanation of what I should expect when an author has used such vars in their code, ideally with a example of how and why such a var would be used and changed in a clojure library.
It's a convention used in other Lisps, such as Common Lisp, to distinguish between special variables, as distinct from lexical variables. A special or dynamic variable has its binding stored in a dynamic environment, meaning that its current value as visible to any point in the code depends upon how it may have been bound higher up the call stack, as opposed to being dependent only on the most local lexical binding form (such as let or defn).
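A minimal Clojure sketch of such a var (note that current Clojure also requires the ^:dynamic metadata for rebinding to work):
;; An earmuffed var with a root value, rebound only within a dynamic extent.
(def ^:dynamic *timeout-ms* 1000)

(defn fetch! []
  (println "fetching with timeout" *timeout-ms*))

(fetch!)                      ; fetching with timeout 1000
(binding [*timeout-ms* 50]
  (fetch!))                   ; fetching with timeout 50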
Note that in his book Let Over Lambda, Doug Hoyte argues against the "earmuffs" asterisk convention for naming special variables. He uses an unusual macro style that makes reference to free variables, and he prefers not to commit to or distinguish whether those symbols will eventually refer to lexical or dynamic variables.
Though targeted specifically at Common Lisp, you might enjoy Ron Garret's essay The Idiot's Guide to Special Variables. Much of it can still apply to Clojure.
Functional programming is all about safe, predictable functions. In fact, some of us are afraid of that spooky "action at a distance" thing. When people call a function, they get a warm fuzzy satisfaction that the function will always give them the same result if they call the function or read the value again. The *un-warm-and-fuzzy* bristly things exist to warn programmers that this variable is less cuddly than some of the others.
Some references I found in the Clojure newsgroups:
Re: making code readable
John D. Hume
Tue, 30 Dec 2008 08:30:57 -0800
On Mon, Dec 29, 2008 at 4:10 PM, Chouser wrote:
I believe the idiom for global values like this is to place asterisks
around the name.
I thought the asterisk convention was for variables intended for
dynamic binding. It took me a minute to figure out where I got that
idea. "Programming Clojure" suggests it (without quite saying it) in
chapter 6, section 3.
"Vars intended for dynamic binding are sometimes called special vari-
ables. It is good style to name them
with leading and trailing asterisks."
Obviously the book's a work in progress, but that does sound
reasonable. A special convention for variables whose values change (or
that my code's welcome to rebind) seems more useful to me than one for
"globals" (though I'm not sure I'd consider something like grid-size
for a given application a global). Based on ants.clj it appears Rich
doesn't feel there needs to be a special naming convention for that
sort of value.
and...
I believe the idiom for global values like this is to place asterisks
around the name. Underscores (and CamelCase) should only be used when
required for Java interop:
(def *grid-size* 10)
(def *height* 600)
(def *margin* 50)
(def *x-index* 0)
(def *y-index* 1)