How do I invoke a Java method from Perl 6 on the JVM?

use java::util::zip::CRC32:from<java>;
my $crc = CRC32.new();
for 'Hello, Java'.encode('utf-8') {
    $crc.'method/update/(B)V'($_);
}
say $crc.getValue();
Sadly, this does not work:
Method 'method/update/(B)V' not found for invocant of class 'java.util.zip.CRC32'
This code is available at the following links; it is the only example I've been able to find:
Rakudo Perl 6 on the JVM (slides)
Perl 6 Advent Calendar: Day 03 – Rakudo Perl 6 on the JVM

Final answer
Combining the code cleanups explained in the "Your answer cleaned up" section below with Pepe Schwarz's improvements mentioned in the "Expectation alert" section below, we get:
use java::util::zip::CRC32:from<Java>;
my $crc = CRC32.new();
for 'Hello, Java'.encode('utf-8').list {
    $crc.update($_);
}
say $crc.getValue();
Your answer cleaned up
use v6;
use java::util::zip::CRC32:from<Java>;
my $crc = CRC32.new();
for 'Hello, Java'.encode('utf-8').list { # Appended `.list`
    $crc.'method/update/(I)V'($_);
}
say $crc.getValue();
One important change is the appended .list.
The 'Hello, Java'.encode('utf-8') fragment returns an object, a utf8. That object returns just one value (itself) to the for statement. So the for iterates just once, passing the object to the code block with the update line in it.
Iterating just once could make sense if the update line was .'method/update/([B)V', which maps to a Java method that expects a buffer of 8 bit ints, which is essentially what a Perl 6 utf8 is. However, that would require some support Perl 6 code (presumably in the core compiler) to marshal (automagically convert) the Perl 6 utf8 into a Java buf[] and if that code ever existed/worked it sure isn't working when I test with the latest Rakudo.
But if one appends a judicious .list as shown above and changes the code block to match, things work out.
First, the .list results in the for statement iterating over a series of integers.
Second, like you, I called the Integer arg version of the Java method (.'method/update/(I)V') instead of the original buffer arg version and the code then worked correctly. (This means that the binary representation of the unsigned 8 bit integers returned from the Perl 6 utf8 object is either already exactly what the Java method expects or is automagically marshaled for you.)
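To make the difference concrete, here is a quick illustrative check (my addition, not part of the original programs; the output shown is what I'd expect for this ASCII-only string, and the exact formatting may vary across Rakudo versions):

my $encoded = 'Hello, Java'.encode('utf-8');
say $encoded.^name;  # utf8 -- a single Blob object, so a bare `for` sees just one item
say $encoded.elems;  # 11
say $encoded.list;   # (72 101 108 108 111 44 32 74 97 118 97) -- the integers the `.list` form iterates over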
Another required change is that the from<java> needs to be from<Java> per your comment below -- thanks.
Expectation alert
As of Jan 2015:
Merely using the JVM backend for Rakudo/NQP (i.e. running pure P6 code on a JVM) still needs more hardening before it can be officially declared ready for production use. (This is in addition to the all-round hardening that the entire P6 ecosystem is expected to undergo this year.) The JVM backend will hopefully get there in 2015 -- ideally as part of the initial official launch of Perl 6 being ready for production use this year -- but that will largely depend on demand and on more devs using it and contributing patches.
P6 code calling Java code is an additional project. Pepe Schwarz has made great progress in the last couple of months in getting up to speed, learning the codebase, and landing commits. He has already implemented the obviously nicer shortname calling shown at the start of this answer and completed a lot more of the marshaling logic for converting between P6 and Java types. He is actively soliciting feedback and requests for specific improvements.

The code responsible for this area of Java interop is found in the class org.perl6.nqp.runtime.BootJavaInterop. It suggests that overloaded methods are identified by the string method/<name>/<descriptor>. The descriptor is computed by org.objectweb.asm.Type#getMethodDescriptor. That jar is available through Maven from http://mvnrepository.com/artifact/asm/asm.
import java.util.zip.CRC32
import org.objectweb.asm.Type

object MethodSignatures {
  def printSignature(cls: Class[_], method: String, params: Class[_]): Unit = {
    val m = cls.getMethod(method, params)
    val d = Type.getMethodDescriptor(m)
    println(m)
    println(s"\t$d")
  }

  def main(args: Array[String]): Unit = {
    val cls = classOf[CRC32]
    // see https://docs.oracle.com/javase/8/docs/api/java/util/zip/CRC32.html
    val ab = classOf[Array[Byte]]
    val i = classOf[Int]
    printSignature(cls, "update", ab)
    printSignature(cls, "update", i)
  }
}
This prints
public void java.util.zip.CRC32.update(byte[])
([B)V
public void java.util.zip.CRC32.update(int)
(I)V
Since I want to call the update(int) variant of this overloaded method, the correct method invocation (the update line of the example program) is
$crc.'method/update/(I)V'($_);
This crashes with
This representation can not unbox to a native int
Finally, for some reason I do not understand, changing the same line to
$crc.'method/update/(I)V'($_.Int);
fixes that and the example runs fine (presumably the explicit .Int coerces each byte value into an Int that the interop layer can unbox to a native int).
The final version of the code is
use v6;
use java::util::zip::CRC32:from<java>;
my $crc = CRC32.new();
for 'Hello, Java'.encode('utf-8') {
    $crc.'method/update/(I)V'($_.Int);
}
say $crc.getValue();

I got this to work on Perl 6.c with the following modification (Jan 4, 2018):
use v6;
use java::util::zip::CRC32:from<JavaRuntime>;
my $crc = CRC32.new();
for 'Hello, Java'.encode('utf-8').list {
    $crc.update($_);
}
say $crc.getValue();
Resulting in:
% perl6-j --version
This is Rakudo version 2017.12-79-g6f36b02 built on JVM
implementing Perl 6.c.
% perl6-j crcjava.p6
1072431491

Related

Kotlin Path.useLines { } - How not to get IOException("Stream closed")?

Kotlin has nice wrappers and shortcuts, but sometimes I get caught not understanding them.
I have this simplified code:
class PipeSeparatedItemsReader(private val filePath: Path) : ItemsReader {
    override fun readItems(): Sequence<ItemEntry> {
        return filePath.useLines { lines ->
            lines.map { ItemEntry("A", "B", "C", "D") }
        }
    }
}
And then I have:
val itemsPath = Path(...).resolve()
val itemsReader = PipeSeparatedItemsReader(itemsPath)
for (itemEntry in itemsReader.readItems())
    updateItem(itemEntry)
// I have also tried itemsReader.readItems().forEach { ... }
This is quite straightforward - I expect this code to give me a sequence which opens the file, reads and parses the lines, gives me ItemEntrys, and, when used up, closes the file.
What I get, however, is IOException("Stream closed").
Somehow, even before the first item is read (I have debugged), somewhere within Kotlin's wrappers, the reader.in becomes null, so this exception is thrown in hasNext().
I have seen a similar question here: Kotlin to chain multiple sequences from different InputStream?
That one includes a lot of Java boilerplate which I would like to avoid.
How should I code this sequence using Path.useLines()?
Every Kotlin helper with "use" in the name closes the underlying resource at the end of the lambda you pass (at least that's the convention in the stdlib as far as I know). The most common example is AutoCloseable.use.
The Path.useLines extension is no exception:
Calls the block callback giving it a sequence of all the lines in this file and closes the reader once the processing is complete. [emphasis mine]
This means useLines closes the reader behind the sequence of lines once the block is done, and thus you cannot return a lazy sequence out of it, because you can't use it after the useLines function returns.
So, if you want to return a sequence of lines for later use, you cannot return a transformed sequence from that of useLines directly. Sequences actually cannot detect when someone is done using them, which is why useLines needs a lambda to give a "scope" or "lifetime" to the sequence and know when to close the underlying reader.
If you want to wrap this, you have 2 major options: either split the sequence operation and the close operation (make your PipeSeparatedItemsReader closeable), or use a lambda to process things in-place in readItems() the same way useLines does.
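For instance, here is a minimal sketch of the second option (my own illustration, not from the original answer; the ItemEntry fields and the pipe-splitting are assumptions based on the question, and the ItemsReader interface would have to change shape accordingly):

import java.nio.file.Path
import kotlin.io.path.useLines

// Assumed stand-in for the question's ItemEntry type.
data class ItemEntry(val a: String, val b: String, val c: String, val d: String)

class PipeSeparatedItemsReader(private val filePath: Path) {
    // Instead of returning a Sequence, accept a block and run it while the
    // reader opened by useLines is still alive, just as useLines itself does.
    fun <T> readItems(block: (Sequence<ItemEntry>) -> T): T =
        filePath.useLines { lines ->
            block(lines.map { parseLine(it) })
        }

    private fun parseLine(line: String): ItemEntry {
        val fields = line.split('|')
        return ItemEntry(fields[0], fields[1], fields[2], fields[3])
    }
}

// Usage: everything that touches the sequence happens inside the lambda,
// so the file is still open while the items are processed.
// itemsReader.readItems { items -> items.forEach { updateItem(it) } }

The first option would instead expose the underlying reader (or make PipeSeparatedItemsReader implement Closeable) so that the caller decides when to close it.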

Alter how arguments are processed before they're passed to sub MAIN

Given the documentation and the comments on an earlier question, by request I've made a minimal reproducible example that demonstrates a difference between these two statements:
my %*SUB-MAIN-OPTS = :named-anywhere;
PROCESS::<%SUB-MAIN-OPTS><named-anywhere> = True;
Given a script file with only this:
#!/usr/bin/env raku
use MyApp::Tools::CLI;
and a module file in MyApp/Tools called CLI.pm6:
#PROCESS::<%SUB-MAIN-OPTS><named-anywhere> = True;
my %*SUB-MAIN-OPTS = :named-anywhere;
proto MAIN(|) is export {*}
multi MAIN( 'add', :h( :$hostnames ) ) {
    for @$hostnames -> $host {
        say $host;
    }
}
multi MAIN( 'remove', *@hostnames ) {
    for @hostnames -> $host {
        say $host;
    }
}
The following invocation from the command line will not result in a recognized subroutine, but will show the usage message instead:
mre.raku add -h=localhost -h=test1
Switching my %*SUB-MAIN-OPTS = :named-anywhere; for PROCESS::<%SUB-MAIN-OPTS><named-anywhere> = True; will print two lines with the two hostnames provided, as expected.
If, however, this is done in a single file as below, both work identically:
#!/usr/bin/env raku
#PROCESS::<%SUB-MAIN-OPTS><named-anywhere> = True;
my %*SUB-MAIN-OPTS = :named-anywhere;
proto MAIN(|) is export {*}
multi MAIN( 'add', :h( :$hostnames )) {
    for @$hostnames -> $host {
        say $host;
    }
}
multi MAIN( 'remove', *@hostnames ) {
    for @hostnames -> $host {
        say $host;
    }
}
I find this hard to understand.
When reproducing this, be aware of how each command must be called.
mre.raku remove localhost test1
mre.raku add -h=localhost -h=test1
So a named array-reference is not recognized when this is used in a separate file with my %*SUB-MAIN-OPTS = :named-anywhere;, while PROCESS::<%SUB-MAIN-OPTS><named-anywhere> = True; always works. And for a slurpy array, both work identically in both cases.
The problem is that it isn't the same variable in both the script and in the module.
Sure they have the same name, but that doesn't mean much.
my \A = anon class Foo {}
my \B = anon class Foo {}
A ~~ B; # False
B ~~ A; # False
A === B; # False
Those two classes have the same name, but are separate entities.
If you look at the code for other built-in dynamic variables, you see something like:
Rakudo::Internals.REGISTER-DYNAMIC: '$*EXECUTABLE-NAME', {
    PROCESS::<$EXECUTABLE-NAME> := $*EXECUTABLE.basename;
}
This makes sure that the variable is installed into the right place so that it works for every compilation unit.
If you look for %*SUB-MAIN-OPTS, the only thing you find is this line:
my %sub-main-opts := %*SUB-MAIN-OPTS // {};
That looks for the variable in the main compilation unit. If it isn't found it creates and uses an empty Hash.
So when you try to do it in a scope other than the main compilation unit, it isn't in a place where it could be found by that line.
To test if adding that fixes the issue, you can add this to the top of the main compilation unit. (The script that loads the module.)
BEGIN Rakudo::Internals.REGISTER-DYNAMIC: '%*SUB-MAIN-OPTS', {
    PROCESS::<%SUB-MAIN-OPTS> := {}
}
Then in the module, write this:
%*SUB-MAIN-OPTS = :named-anywhere;
Or better yet this:
%*SUB-MAIN-OPTS<named-anywhere> = True;
After trying this, it seems to work just fine.
The thing is that something like that used to be there.
It was removed on the thought that it slows down every Raku program.
Though I think any slowdown it caused would still be an issue, as the line that is still there has to look to see if there is a dynamic variable of that name.
(There are more reasons given, and I frankly disagree with all of them.)
May a cuppa bring enlightenment to future SO readers pondering the meaning of things.[1]
Related answers by Liz
I think Liz's answer to an SO asking a similar question may be a good read for a basic explanation of why a my (which is like a lesser our) in the mainline of a module doesn't work, or at least confirmation that core devs know about it.
Her later answer to another SO explains how one can use my by putting it inside a RUN-MAIN.
Why does a slurpy array work by default, but not named arguments anywhere?
One rich resource on why things are the way they are is the section Declaring a MAIN subroutine of S06 (Synopsis on Subroutines)[2].
A key excerpt:
As usual, switches are assumed to be first, and everything after the first non-switch, or any switches after a --, are treated as positionals or go into the slurpy array (even if they look like switches).
So it looks like this is where the default behavior, in which nameds can't go just anywhere, comes from; it seems that @Larry[3] was claiming that the "usual" shell convention was as described, and implicitly arguing that this convention should dictate the default behavior.
Since Raku was officially released, the RFC "Allow subcommands in MAIN" put us on the path to today's :named-anywhere option. The RFC presented a very powerful 1-2 punch -- an unimpeachable two-line hackers' prose/data argument that quickly led to rough consensus, plus a working code PR with this commit message:
Allow --named-switches anywhere in command line.
Raku was GNU-like in that it has '--double-dashes' and that it stops interpreting named parameters when it encounters '--', but unlike GNU-like parsing, it also stopped interpreting named parameters when encountering any positional argument. This patch makes it a bit more GNU-like by allowing named arguments after a positional, to prepare for allowing subcommands.
> Alter how arguments are processed before they're passed to sub MAIN
In the above linked section of S06 @Larry also wrote:
Ordinarily a top-level Raku "script" just evaluates its anonymous mainline code and exits. During the mainline code, the program's arguments are available in raw form from the @*ARGS array.
The point here being that you can preprocess @*ARGS before they're passed to MAIN.
Continuing:
At the end of the mainline code, however, a MAIN subroutine will be called with whatever command-line arguments remain in @*ARGS.
Note that, as explained by Liz, Raku now has a RUN-MAIN routine that's called prior to calling MAIN.
Then comes the standard argument processing (alterable by using standard options, of which there's currently only the :named-anywhere one, or userland modules such as SuperMAIN which add in various other features).
And finally @Larry notes that:
Other [command line parsing] policies may easily be introduced by calling MAIN explicitly. For instance, you can parse your arguments with a grammar and pass the resulting Match object as a Capture to MAIN.
A doc fix?
Yesterday you wrote a comment suggesting a doc fix.
I now see that we (collectively) know about the coding issue. So why is the doc the way it is? I think the combination of your SO and the prior ones provides enough anecdata to support at least considering filing a doc issue to the contrary. Then again, Liz hints in one of the SOs that a fix might be coming, at least for ours. And SO is itself arguably doc. So maybe it's better to wait? I'll punt and let you decide. At least you now have several SOs to quote if you decide to file a doc issue.
Footnotes
[1] I want to be clear that if anyone perceives any fault associated with posting this SO then they're right, and the fault is entirely mine. I mentioned to @acw that I'd already done a search, so they could quite reasonably have concluded there was no point in them doing one as well. So, mea culpa, bad coffee-inspired puns included. (Bad puns, not bad coffee.)
[2] Imo these old historical speculative design docs are worth reading and rereading as you get to know Raku, despite them being obsolete in parts.
[3] @Larry emerged in Raku culture as a fun and convenient shorthand for Larry Wall et al, the Raku language team led by Larry.

How do I do LISP (apply) in Dart?

The docs say to use f.call.apply(arguments), but that seems to only work for object methods, not functions.
testapply.dart:
#!/usr/bin/env dart

say(a, b, c) {
  print("${a}!");
  print("${b}!");
  print("${c}!");
}

main() {
  var args = [1, 2, 3];
  say.call.apply(args);
}
Trace:
$ dart testapply.dart
'/Users/andrew/Desktop/testapply.dart': Error: line 11 pos 2: Unresolved identifier 'Function 'say': static.'
say.call.apply(args);
^
Is there a way to do LISP (apply f args) without using objects?
Alternatively, is there a way to dynamically wrap an arbitrary function in an object so that it can be applied using f.call.apply(arguments)?
Alternatively, can Dart curry?
The documentation page you refer to says,
This functionality is not yet implemented, but is specified as part of version 0.07 of the Dart Programming Language Specification. Hopefully it will find its way into our implementations in due course, though it may be quite a while.
That may or may not be the issue...

Can I introspect the name of the main.main package?

This is a fairly niche problem, but I'm currently trying to write a conventions-based settings storage library with golang. It would be a great API boon if I could programmatically determine the package name of the caller that wants to store something (eg "github.net/author/projectname/pkg") when it calls my library function.
With Python a similar thing could be achieved with the inspect module, or even with __main__.__file__ and a look at the file system.
You can get similar information if you use the following functions:
runtime.Caller
runtime.FuncForPC
The code may look like this:
pc, file, line, ok := runtime.Caller(1)
if !ok { /*failed*/ }
println(pc, file, line, ok)
f := runtime.FuncForPC(pc)
if f == nil { /*failed*/ }
println(f.Name())
If I put the above code (with the 1st line changed to runtime.Caller(0)) into a (randomly chosen) Go library which I have installed in GOROOT, it prints:
134626026 /tmp/go-build223663414/github.com/mattn/go-gtk/gtk/_obj/gtk.cgo1.go -4585 true
github.com/mattn/go-gtk/gtk.Init
Or it prints:
134515752 /home/user/go/src/github.com/mattn/go-gtk/example/event/event.go 12 true
main.main
The filename on the 1st line and the function name on the 2nd line seem to contain the information you are looking for.
There are two problems:
It may give an incorrect result if functions are automatically inlined by the compiler
For any function F defined in package main, the function name is just main.F. For example, if runtime.Caller(0) is called from main(), the function name is main.main even if the main() function is defined in a Go file found in GOROOT/src/github.com/mattn/go-gtk/.... In this case, the output from runtime.Caller is more useful than the output from runtime.FuncForPC.
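To go from that output to the package path the question asks for, here is a small sketch (my addition, not part of the original answer) that trims the trailing function name from runtime.FuncForPC's result, subject to the same inlining and main-package caveats described above:

package main

import (
	"runtime"
	"strings"
)

// callerPackage reports the import path of the immediate caller's package,
// e.g. "github.com/author/project/pkg", or just "main" for the main package.
func callerPackage() string {
	pc, _, _, ok := runtime.Caller(1) // 1 = whoever called callerPackage
	if !ok {
		return ""
	}
	f := runtime.FuncForPC(pc)
	if f == nil {
		return ""
	}
	name := f.Name() // e.g. "github.com/mattn/go-gtk/gtk.Init" or "main.main"
	// Drop the trailing ".FuncName". Note that for methods this keeps the
	// receiver (e.g. "(*T)") in the result, so this is only a sketch that
	// works for plain package-level functions.
	if i := strings.LastIndex(name, "."); i >= 0 {
		return name[:i]
	}
	return name
}

func main() {
	println(callerPackage()) // prints "main" when called from package main
}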

Static testing for Scala

There are some nice libraries for testing in Scala (Specs, ScalaTest, ScalaCheck). However, with Scala's powerful type system, important parts of an API being developed in Scala are expressed statically, usually in the form of some undesirable or disallowed behavior being prevented by the compiler.
So, what is the best way to test whether something is prevented by the compiler when designing a library or other API? It is unsatisfying to comment out code that is supposed to be uncompilable and then uncomment it to verify.
A contrived example testing List:
val list: List[Int] = List(1, 2, 3)
// should not compile
// list.add("Chicka-Chicka-Boom-Boom")
Does one of the existing testing libraries handle cases like this? Is there an approach that people use that works?
The approach I was considering was to embed code in a triple-quote string or an XML element and call the compiler from my test. The calling code would look something like this:
should {
  notCompile(<code>
    val list: List[Int] = List(1, 2, 3)
    list.add("Chicka-Chicka-Boom-Boom")
  </code>)
}
Or, something along the lines of an expect-type script called on the interpreter.
I have created some specs executing some code snippets and checking the results of the interpreter.
You can have a look at the Snippets trait. The idea is to store the code to execute in an org.specs.util.Property[Snippet]:
val it: Property[Snippet] = Property(Snippet(""))
"import scala.collection.List" prelude it // will be prepended to any code in the it snippet
"val list: List[Int] = List(1, 2, 3)" snip it // snip some code (keeping the prelude)
"list.add("Chicka-Chicka-Boom-Boom")" add it // add some code to the previously snipped code. A new snip would remove the previous code (except the prelude)
execute(it) must include("error: value add is not a member of List[Int]") // check the interpreter output
The main drawback I found with this approach was the slowness of the interpreter. I don't know yet how this could be sped up.
Eric.