How can I find programmatically all the classes, grammars, and roles in a Raku package? (Specified with a string.)
I examined discussions/posts similar to the ones linked below, but the code I came up with is very hard to use. (And does not do the job.)
Meta-programming: what, why and how (perl6advent)
Day 19 – Introspection (perl6advent)
How do I access a module's symbol table dynamically at runtime in Raku? (stackoverflow)
Is there a way to get a list of all known types in a Perl 6 program? (stackoverflow)
Motivation
I would like to automatically generate UML class diagrams for Raku packages.
See the PlantUML diagrams for the Raku package ML::StreamsBlendingRecommender.
I considered the following steps:
Design parsers for the class code of software packages written in languages like C++, Java, Kotlin, or Raku
Generate the corresponding PlantUML code from the parsing results
(Such parsers might not be that hard to derive. Probably, the work of DrForr provides good starting points.)
But given Raku's introspection abilities, instead of writing a parser I should be able to "just" traverse a package's namespaces and classes.
There is no "central" dictionary of classes in Raku. And, making the question even harder to solve, classes only know about their parent classes and the roles they consume; they do not know about any classes that inherit from them, nor, if you look at a role, which other roles and classes consume that role.
Classes and roles in Raku are therefore irresponsible parents :-)
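For example, a class can report its ancestors and roles via the MOP, but nothing will list its descendants (a minimal illustration):

class Animal {}
class Dog is Animal {}
say Dog.^parents(:all);  # ((Animal) (Any) (Mu))
say Dog.^roles;          # ()
# There is no Animal.^children -- that information simply isn't recorded.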
I guess there could be a way to do some trickery in the MOP, but that could have significant performance costs and cause memory leaks (many temporary classes would no longer be garbage collected, because the record keeping would keep them alive).
The question:
How can I find programmatically all the classes, grammars, and roles in a Raku package? (Specified with a string.)
is answered by the package UML::Translators.
For example,
.say for get-namespace-classes( 'ML::TriesWithFrequencies' ).map({ $_ ~~ Str ?? $_ !! $_.^name }).sort
# ML::TriesWithFrequencies::LeafProbabilitiesGatherer
# ML::TriesWithFrequencies::ParetoBasedRemover
# ML::TriesWithFrequencies::PathsGatherer
# ML::TriesWithFrequencies::RegexBasedRemover
# ML::TriesWithFrequencies::ThresholdBasedRemover
# ML::TriesWithFrequencies::Trie
# ML::TriesWithFrequencies::TrieTraverse
# ML::TriesWithFrequencies::Trieish
# TRIEROOT
# TRIEVALUE
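Under the hood, such a traversal can be done by loading a package by name and walking its stash. Here is a minimal sketch (not UML::Translators' actual implementation) that classifies a namespace's symbols by their meta-objects:

# Minimal sketch: resolve a package by name, walk its stash (.WHO),
# and classify each symbol by its meta-object.
sub list-namespace-types(Str $package-name) {
    require ::($package-name);
    for ::($package-name).WHO.kv -> $name, $sym {
        # Check grammars first: Metamodel::GrammarHOW is a subclass of ClassHOW.
        given $sym.HOW {
            when Metamodel::GrammarHOW             { say "grammar $name" }
            when Metamodel::ClassHOW               { say "class   $name" }
            when Metamodel::ParametricRoleGroupHOW { say "role    $name" }
        }
    }
}
list-namespace-types('ML::TriesWithFrequencies');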
The package provides the Command Line Interface (CLI) script to-uml-spec that can be used to generate UML specs with the PlantUML domain-specific language. Here is an example:
to-uml-spec --/methods --/attributes 'Chemistry::Stoichiometry'
@startuml
class Chemistry::Stoichiometry::Grammar <<grammar>> {
}
Chemistry::Stoichiometry::Grammar --|> Grammar
Chemistry::Stoichiometry::Grammar --|> Match
Chemistry::Stoichiometry::Grammar --|> Capture
Chemistry::Stoichiometry::Grammar --|> Chemistry::Stoichiometry::Grammar::ChemicalElement
Chemistry::Stoichiometry::Grammar --|> Chemistry::Stoichiometry::Grammar::ChemicalEquation
Chemistry::Stoichiometry::Grammar --|> NQPMatchRole
class Chemistry::Stoichiometry::ResourceAccess {
}
class Chemistry::Stoichiometry::Actions::MolecularMass {
}
class Chemistry::Stoichiometry::Actions::EquationMatrix {
}
class Chemistry::Stoichiometry::Actions::EquationBalance {
}
Chemistry::Stoichiometry::Actions::EquationBalance --|> Chemistry::Stoichiometry::Actions::EquationMatrix
class Chemistry::Stoichiometry::Actions::WL::System {
}
@enduml
Using the CLI script and a PlantUML JAR file, we can generate UML diagrams like this:
to-uml-spec --/methods --/attributes 'Chemistry::Stoichiometry' | java -jar ~/Downloads/plantuml-1.2022.5.jar -pipe -tjpg > /tmp/myuml.jpg
Related
I have 4 files all in the same directory: main.rakumod, infix_ops.rakumod, prefix_ops.rakumod and script.raku:
main module has a class definition (class A)
*_ops modules have some operator routine definitions that let me write, e.g., $a1 + $a2 in an overloaded way.
script.raku tries to instantiate A objects and use those user-defined operators.
Why three files instead of one? The class definition might be long, and separating the overloaded operator definitions into their own files seemed like a good idea for writing tidier code (easier to manage).
e.g.,
# main.rakumod
class A {
has $.x is rw;
}
# prefix_ops.rakumod
use lib ".";
use main;
multi prefix:<++>(A:D $obj) {
++$obj.x;
$obj;
}
and similar routines in infix_ops.rakumod. Now, in script.raku, my aim is to import only the main module and have the overloaded operators available as well:
# script.raku
use lib ".";
use main;
my $a = A.new(x => -1);
++$a;
but it naturally doesn't see the ++ multi for A objects because, as it stands, main.rakumod doesn't know about the *_ops.rakumod files. Is there a way I can achieve this? If I use prefix_ops in main.rakumod, it says 'use lib' may not be pre-compiled, perhaps because of a circular dependency.
it says 'use lib' may not be pre-compiled
The word "may" is ambiguous. Actually it cannot be precompiled.
The message would be better if it said something to the effect of "Don't put use lib in a module."
This has now been fixed per @codesections++'s comment below.
perhaps because of a circular dependency
No. use lib can only be used by the main program file, the one directly run by Rakudo.
Is there a way I can achieve this?
Here's one way.
We introduce a new file that's used by the other packages to eliminate the circularity. So now we have four files (I've rationalized the naming to stick to A or variants of it for the packages that contribute to the type A):
A-sawn.rakumod that's a role or class or similar:
unit role A-sawn;
Other packages that are to be separated out into their own files use the new "sawn" package and does or is it as appropriate:
use A-sawn;
unit class A-Ops does A-sawn;
multi prefix:<++>(A-sawn:D $obj) is export { ++($obj.x) }
multi postfix:<++>(A-sawn:D $obj) is export { ($obj.x)++ }
The A.rakumod file for the A type does the same thing. It also uses whatever other packages are to be pulled into the same A namespace; this imports symbols from them according to Raku's standard importing rules. And then relevant symbols are explicitly exported:
use A-sawn;
use A-Ops;
sub EXPORT { Map.new: OUTER:: .grep: /'fix:<'/ }  # re-export the operator symbols just imported from A-Ops
unit class A does A-sawn;
has $.x is rw;
Finally, with this setup in place, the main program can just use A;:
use lib '.';
use A;
my $a = A.new(x => -1);
say $a++; # -1
say ++$a; # 1
say ++$a; # 2
The two main things here are:
Introducing an (empty) A-sawn package
This type eliminates circularity using the technique shown in @codesections' answer to Best Way to Resolve Circular Module Loading.
Raku culture has a fun generic term/meme for techniques that cut through circular problems: "circular saws". So I've used a -sawn suffix for the "sawn" typename as a convention when using this technique.[1]
Importing symbols into a package and then re-exporting them
This is done via sub EXPORT { Map.new: ... }.[2] See the doc for sub EXPORT.
The Map must contain a list of symbols (Pairs). For this case I've grepped through keys from the OUTER:: pseudopackage that refers to the symbol table of the lexical scope immediately outside the sub EXPORT the OUTER:: appears in. This is of course the lexical scope into which some symbols (for operators) have just been imported by the use A-Ops; statement. I then grep that symbol table for keys containing fix:<; this will catch all symbol keys with that string in their name (so infix:<..., prefix:<..., etc.). Alter this code as needed to suit your needs.[3]
Footnotes
[1] As things stand, this technique means coming up with a new name that's different from the one used by the consumer of the new type, one that won't conflict with any other packages. This suggests a suffix. I think -sawn is a reasonable choice for an unusual, distinctive, and mnemonic suffix. That said, I imagine someone will eventually package this process up into a new language construct that does the work behind the scenes, generating the name and automating away the manual changes one has to make to packages with the shown technique.
[2] A critically important point is that, if a sub EXPORT is to do what you want, it must be placed outside the package definition to which it applies. And that in turn means it must be before a unit package declaration. And that in turn means any use statement relied on by that sub EXPORT must appear within the same or an outer lexical scope. (This is explained in the doc, but I think it bears summarizing here to try to head off much head scratching, because there's no error message if it's in the wrong place.)
[3] As with the circularity saw aspect discussed in footnote 1, I imagine someone will also eventually package up this import-and-export mechanism into a new construct, or, perhaps even better, an enhancement of Raku's built-in use statement.
Hi @hanselmann, here is how I would write this (in 3 files / same dir):
Define my class(es):
# MyClass.rakumod
unit module MyClass;
class A is export {
has $.x is rw;
}
Define my operators:
# Prefix_Ops.rakumod
unit module Prefix_Ops;
use MyClass;
multi prefix:<++>(A:D $obj) is export {
++$obj.x;
$obj;
}
Run my code:
# script.raku
use lib ".";
use MyClass;
use Prefix_Ops;
my $a = A.new(x => -1);
++$a;
say $a.x; #0
Taking my cue from the Module docs, there are a couple of things I am doing differently:
Avoiding the use of main (or Main, or MAIN) --- I am wary that MAIN is a reserved name and just want to keep clear of engaging any of that (cool) machinery
Bringing in the unit module declaration at the top of each 'rakumod' file ... it may be possible to use bare files in Raku ... but I have never tried this and would say that it is not obvious from the docs that it is even possible, or supported
Now, since I wanted this to work the first time, you will note that I use the same file name and module name ... again it may be possible to do that differently (multiple modules in one file and so on) ... but I have not tried that either
Using the 'is export' trait where I want my script to be able to use these definitions ... as you will know from close study of the docs ;-), each module has its own namespace (the "stash") and we need export to shove the exported definitions into the namespace of the script
As @raiph mentions, you only need the script to define the module library location
Since you want your prefix multi to "know" about class A, you also need to use MyClass in the Prefix_Ops module
Anyway, all in all, I think that the raku module system exemplifies the unique combination of "easy things easy and hard things doable" ... all I had to do with your code (which was very close) was tweak a few filenames and sprinkle in some concise concepts like 'unit module' and 'is export', and it really does not look much different since raku keeps all the import/export machinery under the surface like the swan gliding over the river...
The Perl6 standard grammar is relatively large. Although this facilitates expression once mastered, it creates a barrier to mastery. For instance, core constructs often have multiple forms supporting different programming paradigms. A basic example is the variety of syntaxes for creating Pairs:
Pair.new('key', 'value'); # The canonical way
'key' => 'value'; # this...
:key<value>; # ...means the same as this
:key<value1 value2>; # But this is key => <value1 value2>
:foo(127); # short for foo => 127
:127foo; # the same as foo => 127
Note, in particular, the comment on the first form: "The canonical way".
Another example is the documentation for method make:
This is just a little sugar for $/.made = $ast which is a very common operation in actions.
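For example, the sugar in action (a minimal sketch):

grammar Digits {
    token TOP { \d+ }
}
class DigitsActions {
    method TOP($/) { make +$/ }  # same effect as $/.made = +$/
}
say Digits.parse('42', actions => DigitsActions).made;  # 42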
Is there a canonical form that one may output for a Perl6 program so that, having mastered the canonical sub-grammar, one may inspect any Perl6 program in that form to comprehend it?
I'd say that the Perl6 grammar (in particular the roast) is the canon, so all those forms are sort of 'canonical'. That comment refers to what is actually happening when any of the other forms are compiled/executed. The .new() method for the Pair class gets called to create the new Pair object. That happens, behind the scenes, so to speak, regardless of which of the options you use. The other syntaxes are just easier ways to express the same thing.
You might find the .perl() method helpful. It will describe the way any variable can be expressed in Perl:
> Pair.new('key', 'value').perl
:key("value")
> ('key' => 'value').perl
:key("value")
> (:key<value>).perl
:key("value")
I'm looking for a way to generate a parser from a grammar file (BNF/BNF-like) that will populate an AST. However, I'm also looking to automatically generate the various AST classes in a way that is developer-readable.
Example:
For the following grammar file
expressions = expression+;
expression = CONST | math_expression;
math_expression = add_expression | substract_expression;
add_expression = expression PLUS expression;
substract_expression = expression MINUS expression;
CONST: ('0'..'9')+;
PLUS: '+';
MINUS: '-';
I would like to have the following Java classes generated (with example of what I expect their fields to be):
class Expressions {List<Expression> expression};
class Expression {String constValue; MathExpression mathExpression;} // only one should be filled; note "const" is a reserved word in Java
class MathExpression {AddExpression addExpression; SubstractExpression substractExpression;}
class AddExpression {Expression expression1; Expression expression2;}
class SubstractExpression {Expression expression1; Expression expression2;}
And, in runtime, I would like the expression "1+1-2" to generate the following object graph to represent the AST:
Expressions(Expression(MathExpression(AddExpression(1, SubstractExpression(1, 2)))))
(never mind operator precedence).
I've been exploring DSL parser generators (JavaCC/ANTLR and friends) and the closest thing I could find was using ANTLR to generate a listener class with "enterExpression" and "exitExpression" style methods. I found somewhat similar code generated using JavaCC and jjtree with "multi" mode - but it's extremely awkward and difficult to use.
My grammar needs are somewhat simple - and I would like to automate the AST object graph creation as much as possible.
Any hints?
If you want a lot of support for DSL construction, ANTLR and JavaCC probably aren't the way to go. They provide parsing, some support for building trees... and after that you're on your own. But, as you've figured out, it's a lot of work to design your own trees, work out the details, and you're hardly done with the DSL at that point; you still can't use it.
There are more complete solutions out there: JetBrains MPS, Xtext, Spoofax, DMS. All of them provide ways to define a DSL, convert it to an internal form ("build trees"), and provide support for code generation. The first three have integrated IDE support and are intended for "small" DSLs; DMS does not, but handles real languages like C++ as well as DSLs. I think the first three are open source; DMS is commercial (I'm the party behind DMS).
Markus Voelter has just released an online book on DSL Engineering, available for your idea of a donation. He goes into great detail on MPS, Xtext, and Spoofax, but not on DMS. He tells you what you need to know and what you need to do; based on my skim of the book, it is pretty extensive. You probably are not going to get away with "simple"; DSLs have a lot of semantic complexity and the supporting machinery is difficult.
I know the author, have huge respect for his skills in this arena, and have co-lectured with him at technical summer schools, including having some nice beer. Otherwise I have nothing to do with this book.
Let's suppose I have two grammars (and that there is a Lexer defined somewhere), ParserA and ParserB.
In ParserA I have the following code:
parser grammar ParserA;
classDeclaration
scope {
ST mList;
}
...
ParserB is something like:
parser grammar ParserB;
import ParserA;
methodDeclaration : something something { $classDeclaration::mList.add(...) };
The code in the action will fail to compile (by javac) since classDeclaration is in a different class (and file). Any tips on how to fix it?
Any tips on how to fix it?
No, there's (AFAIK) no ANTLR shortcut here: there's no communication possible between imported grammars (either by using scopes or by providing parameters to imported grammar rules).
Aside from getting any real work done, I have an itch. My itch is to write a view engine that closely mimics a template system from another language (Template Toolkit/Perl). This is one of those if I had time/do it to learn something new kind of projects.
I've spent time looking at Coco/R and ANTLR, and honestly, it makes my brain hurt, but some of Coco/R is sinking in. Unfortunately, most of the examples are about creating a compiler that reads source code, but none seem to cover how to create a processor for templates.
Yes, those are the same thing, but I can't wrap my head around how to define the language for templates where most of the source is HTML rather than actual code to be parsed and run.
Are there any good beginner resources out there for this kind of thing? I've taken a gander at Spark, which didn't appear to have the grammar in the repo.
Maybe that is overkill, and one could just text-replace the template syntax with C# in the file and compile it. http://msdn.microsoft.com/en-us/magazine/cc136756.aspx#S2
If you were in my shoes and weren't a language creating expert, where would you start?
The Spark grammar is implemented with a kind-of-fluent domain specific language.
It's declared in a few layers. The rules which recognize the html syntax are declared in MarkupGrammar.cs - those are based on grammar rules copied directly from the xml spec.
The markup rules refer to a limited subset of csharp syntax rules declared in CodeGrammar.cs - those are a subset because Spark only needs to recognize enough csharp to adjust single-quotes around strings to double-quotes, match curly braces, etc.
The individual rules themselves are ParseAction<TValue> delegates, which accept a Position and return a ParseResult. The ParseResult is a simple class which contains the TValue data item parsed by the action and a new Position instance which has been advanced past the content which produced the TValue.
That isn't very useful on its own until you introduce a small number of operators, as described in Parsing expression grammar, which can combine single parse actions to build very detailed and robust expressions about the shape of different syntax constructs.
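The combinator idea itself is language-agnostic. Here is a minimal sketch in Raku (the language used elsewhere on this page), with hypothetical lit and seq helpers rather than Spark's actual C# API:

# A parse action is a closure: given an input string and a position, it
# returns a (value, new-position) pair on success, or Nil on failure.
sub lit(Str $s) {
    -> $input, $pos {
        $input.substr($pos, $s.chars) eq $s ?? ($s, $pos + $s.chars) !! Nil
    }
}

# seq combines two parse actions into one that must match both in order.
sub seq(&a, &b) {
    -> $input, $pos {
        my $r1 = a($input, $pos);
        my $r2 = $r1.defined ?? b($input, $r1[1]) !! Nil;
        $r2.defined ?? (($r1[0], $r2[0]), $r2[1]) !! Nil
    }
}

my &hello = seq(lit('hel'), lit('lo'));
say hello('hello world', 0);  # ((hel lo) 5)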
The technique of using a delegate as a parse action came from Luke H's blog post Monadic Parser Combinators using C# 3.0. I also wrote a post about Creating a Domain Specific Language for Parsing.
It's also entirely possible, if you like, to reference the Spark.dll assembly and inherit a class from the base CharGrammar to create an entirely new grammar for a particular syntax. It's probably the quickest way to start experimenting with this technique, and an example of that can be found in CharGrammarTester.cs.
Step 1. Use regular expressions (regexp substitution) to split your input template string into a token list, for example, split
hel<b>lo[if foo]bar is [bar].[else]baz[end]world</b>!
to
write('hel<b>lo')
if('foo')
write('bar is')
substitute('bar')
write('.')
else()
write('baz')
end()
write('world</b>!')
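Here is a minimal sketch of such a tokenizer in Raku (the language used earlier on this page), using the bracket syntax from the example:

my $template = 'hel<b>lo[if foo]bar is [bar].[else]baz[end]world</b>!';
# Split on the [...] directives, keeping the directives themselves (:v).
for $template.split(/ '[' (<-[ \] ]>+) ']' /, :v) {
    when Match {
        given ~$_[0] {
            when /^ if \s+ (\w+) $/ { say "if('$0')" }
            when 'else'             { say 'else()' }
            when 'end'              { say 'end()' }
            default                 { say "substitute('$_')" }
        }
    }
    when ''  { }                      # skip empty fragments between directives
    default  { say "write('$_')" }
}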
Step 2. Convert your token list to a syntax tree:
* Sequence
** Write
*** ('hel<b>lo')
** If
*** ('foo')
*** Sequence
**** Write
***** ('bar is')
**** Substitute
***** ('bar')
**** Write
***** ('.')
*** Write
**** ('baz')
** Write
*** ('world</b>!')
class Instruction {
}
class Write : Instruction {
    string text;
}
class Substitute : Instruction {
    string varname;
}
class Sequence : Instruction {
    Instruction[] items;
}
class If : Instruction {
    string condition;
    Instruction thenBranch;
    Instruction elseBranch;  // "else" is a reserved word, so the field needs another name
}
Step 3. Write a recursive function (called the interpreter), which can walk your tree and execute the instructions there.
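Here is a minimal sketch of steps 2 and 3 in Raku (the language used earlier on this page), mirroring the pseudocode classes above; the variable values are assumed:

my %vars = foo => True, bar => 'world';

# "then"/"else" are fine as attribute names in Raku.
class Write      { has $.text;    method run { print $.text } }
class Substitute { has $.varname; method run { print %vars{$.varname} } }
class Sequence   { has @.items;   method run { .run for @.items } }
class If         {
    has $.condition;
    has $.then;
    has $.else;
    method run { (%vars{$.condition} ?? $.then !! $.else).?run }
}

# The tree from step 2, executed by calling .run on the root:
Sequence.new(items => [
    Write.new(text => 'hel<b>lo'),
    If.new(
        condition => 'foo',
        then      => Sequence.new(items => [
            Write.new(text => 'bar is '),
            Substitute.new(varname => 'bar'),
            Write.new(text => '.'),
        ]),
        else      => Write.new(text => 'baz'),
    ),
    Write.new(text => 'world</b>!'),
]).run;  # prints: hel<b>lobar is world.world</b>!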
Another, alternative approach (instead of steps 1--3) if your language supports eval() (such as Perl, Python, Ruby): use a regexp substitution to convert the template to an eval()-able string in the host language, and run eval() to instantiate the template.
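In Raku, for example, a crude version of this looks like the following (string EVAL must be enabled explicitly; this sketch assumes the placeholders contain no quotes or braces):

use MONKEY-SEE-NO-EVAL;  # string EVAL is gated behind this pragma in Raku

my %vars = bar => 'world';
my $template = 'hello [bar]!';

# Rewrite each "[name]" placeholder as string interpolation, then EVAL the
# whole template as one double-quoted string.
my $interp = $template.subst(/ '[' (\w+) ']' /, -> $m { '{%vars<' ~ $m[0] ~ '>}' }, :g);
say EVAL '"' ~ $interp ~ '"';  # hello world!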
There are sooo many things to do. But it does work for one simple GET statement plus a test. That's a start.
http://github.com/claco/tt.net/
In the end, I already had too much time invested in ANTLR to give loudejs' method a go. I wanted to spend a little more time on the whole process rather than the parser/lexer. Maybe in version 2 I can have a go at the Spark way when my brain understands things a little more.
Vici Parser (formerly known as LazyParser.NET) is an open-source tokenizer/template parser/expression parser which can help you get started.
If it's not what you're looking for, then you may get some ideas by looking at the source code.