Let's get straight to the point. I want to create a declarator, let's call it testable, that is the combination of our and is export, so that instead of writing
our $foo is export;
we would simply write
testable $foo;
I'm pretty sure something can be done through the metamodel, but I'm not sure this precise thing (scope and trait) can be done or, for that matter, how.
Related
Is there an idiomatic way of applying and assigning an object variable's method call, but only if it's defined (both the method and the result)?
Like using the safe-call operator .? and the defined-or operator //, and, following the "DRY principle", using the variable only once in the operation?
Like this (but using another variable feels like cheating):
my $nicevariable = "fobar";
# key step
(my $x := $nicevariable) = $x.?possibly-nonexistent-meth // $x;
say $nicevariable; # => possibly-nonexistent-meth (non-Nil) result or "foobar"
... And avoiding andthen, if possible.
Are you aware of the default trait on variables?
Not sure if it fits your use case, but $some-var.?unknown-method returns Nil, so:
my $nicevariable is default("fobar");
$nicevariable = $nicevariable.?possibly-nonexistent-meth;
say $nicevariable; # foobar
results in $nicevariable being reset to its default value.
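The mechanism behind this, in case it isn't obvious: assigning Nil to a variable restores the variable's default value, which is Any unless changed with is default. A minimal illustration (the variable name is mine):

my $x is default(42);
$x = 5;
say $x;   # 5
$x = Nil; # assigning Nil restores the default
say $x;   # 42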
I'm not entirely sure what you mean by "using the variable only once in the operation". If Liz's answer qualifies, then it's probably the cleaner way to go.
If not, here's a different approach that avoids naming the variable twice:
my $nicevariable = "foobar";
$nicevariable.=&{.^lookup('possibly-nonexistent-meth')($_)}
This is a bit too cryptic for my tastes; here's what it does: If the method exists, then that's similar to¹ calling &method($nicevariable), which is the same as $nicevariable.method. If the method does not exist, then it's like calling Mu($nicevariable) – that is, coercing $nicevariable into Mu or a subtype of Mu. But since everything is already a subtype of Mu, that's a no-op and just returns $nicevariable.
[1]: Not quite, since &method would be a Sub, but basically.
EDIT:
Actually, that was over-complicating things. Here's a simpler version:
my $nicevariable = "foobar";
$nicevariable.=&{.?possibly-nonexistent-meth // $_}
Not sure why I didn't just have that to begin with…
Assuming the method returns something that is defined, you could do (if I'm understanding the question correctly):
my $x = "foobar";
$x = $_ with $x.?possibly-nonexistent-meth;
$x will remain unchanged if the method didn't exist (or it did exist and returned a type object).
As of this merge you can write:
try { let $foo .= bar }
This is a step short of what I think would have been an ideal solution, which is unfortunately a syntax error (due to a pervasive weakness in Raku's current grammar that I'm guessing is effectively unsolvable, much as I would love it to be solved):
{ let $foo .?= bar } # Malformed postfix call...
(Perhaps I'm imagining things, but I see a glimmer of hope that the above wrinkle (and many like it) will be smoothed over a few years from now. This would be after RakuAST lands for Raku .e, and the grammar gets a hoped-for cleanup in Raku .f or .g.)
Your question's title is:
Variable re-assign method result if not Nil
My solution re-assigns the variable with the method's result if that result is not undefined, which is more general than just Nil.
Then again, your question's body asks for exactly that more general solution:
Is there an idiomatic way of applying and assigning an object variable's method call, but only if it's defined (both the method and the result)?
So is my solution an ideal one?
My solution is not idiomatic. But that might well be because of the bug I found, now solved with the merge linked at the start of my answer. I see no reason why it should not become idiomatic, once it's in shipping Rakudos.
The potentially big issue is that the try stores any exception thrown in $! rather than letting it blow up. Perhaps that's OK for a given use case; perhaps not.
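To make that concrete, here's a sketch of the behavior (assuming a Rakudo that includes the merge above; flip just stands in for a method that exists):

my $foo = "foobar";
try { let $foo .= flip }            # method exists: re-assignment sticks
say $foo;                           # raboof
try { let $foo .= no-such-method }  # throws; try catches it, let restores $foo
say $foo;                           # still raboof
say $!.^name;                       # X::Method::NotFound, stored rather than thrown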
Special thanks to you for asking your question, which prompted us to come up with various solutions, which led me to file an issue, which led vrurg to both analyse the problem I encountered and then fix it. :)
Forgive me if this is a kind of silly question, but I've been going through "Programming Elm" and one thing struck me as a little odd: in the text he shows an example of creating a record,
dog = { name = "Tucker", age = 11 }
and then right after that he shows a function that returns a record
haveBirthday d = { name = d.name, age = d.age + 1 }
To me, the syntax for both seems remarkably similar. How does the compiler know which is which? By the + on the right hand side of the function, that implies change, so it has to be a function? By the fact that there's an argument, d? Or is it that the difference between generating a record and a function is quite obvious, and it's just in this case that they seem so alike? Or is it that in some subtle way that I don't yet have the Zen to grasp, they are in fact the same thing? (That is, something like "everything is a function"?)
I've looked at https://elm-lang.org/docs/syntax#functions -- the docs are very user-friendly, but brief. Are there any other resources that give a more didactic view of the syntax (like this book does for Haskell)?
Thanks for any help along the way.
In an imperative language where "side-effects" are the norm, the term "function" is often used to describe what's more appropriately called a procedure or sub-routine: a set of instructions to be executed when called, where order of execution and re-evaluation is essential because mutation and other side-effects can change anything from anywhere at any time.
In functional programming, however, the notion of a function is closer to the mathematical sense of the term, where its return value is computed entirely based on its arguments. This is especially true for a "pure" functional language like Elm, which normally does not allow "side-effects". That is, effects that interact with the "outside world" without going through the input arguments or return value. In a pure functional language it does not make sense to have a function that does not take any arguments, because it would always do the same thing, and computing the same value again and again is just wasteful. A function with no arguments is effectively just a value. And a function definition and value binding can therefore be distinguished solely based on whether or not it has any arguments.
But there are also many hybrid programming languages. Most functional languages are hybrids in fact, that allow side-effects but still stick close to the mathematical sense of a function. These languages also typically don't have functions without arguments, but use a special type called unit, or (), which has only one value, also called unit or (), which is used to denote a function that takes no significant input, or which returns nothing significant. Since unit has only one value, it carries no significant information.
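In Elm the unit type is written (), and a function can pattern-match on it directly. A contrived sketch (the function name is mine; since Elm is pure and keeps effects in the runtime, zero-information arguments are rare in real Elm code):

greet : () -> String
greet () =
    "hello"

-- greet () == "hello"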
Many functional languages don't have functions that take multiple arguments either. In Elm and many other languages, a function takes exactly one argument. No more and no less, ever. You might have seen Elm code which appears to have multiple arguments, but that's all an illusion. Or "syntactic sugar", as it's called in language-theoretic lingo.
When you see a function definition like this:
add a b = a + b
that actually translates to this:
add = \a -> \b -> a + b
A function that takes an argument a, then returns another function which takes an argument b, which does the actual computation and returns the result. This is called currying.
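Type annotations make the currying visible (the annotation here is mine; without it Elm infers a more general number type):

add : Int -> Int -> Int
add a b =
    a + b

-- the arrows associate to the right, so this reads as
-- add : Int -> (Int -> Int)
-- i.e. add takes an Int and returns an (Int -> Int) function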
Why do this? Because it makes it very convenient to partially apply functions. You can just leave out the last, or last few, arguments, then instead of an error you get a function back which you can fully apply later to get the result. This enables you to do some really handy things.
Let's look at an example. To fully apply add from above we'd just do:
add 2 3
The compiler actually parses this as (add 2) 3, so we've kind of done partial application already, but then immediately applied to another value. But what if we want to add 2 to a whole bunch of things and don't want to write add 2 everywhere, because DRY and such? We write a function:
add2ToThings thing =
    add 2 thing
(Ok, maybe a little bit contrived, but stay with me)
Partial application allows us to make this even shorter!
add2ToThings =
    add 2
You see how that works? add 2 returns a function, and we just give that a name. There have been numerous books written about this marvellous idea in OOP, but they call it "dependency injection" and it's usually slightly more verbose when implemented with OOP techniques.
Anyway, say we have a list of "thing"s, we can get a new list with 2 added to everything by mapping over it like this:
List.map add2ToThings things
But we can do even better! Since add 2 is actually shorter than the name we gave it, we might as well just use it directly:
List.map (add 2) things
Ok, but then say we want to filter out every value that is exactly 5. We can actually partially apply infix operators too, but we have to surround the operator in parentheses to make it behave like an ordinary function:
List.filter ((/=) 5) (List.map (add 2) things)
This is starting to look a bit convoluted though, and reads backwards since we filter after we map. Fortunately we can use Elm's pipe operator |> to clean it up a bit:
things
    |> List.map (add 2)
    |> List.filter ((/=) 5)
The pipe operator was "discovered" because of partial application. Without that it couldn't have been implemented as an ordinary operator, but would have had to be implemented as a special syntax rule in the parser. Its implementation is (essentially) just:
x |> f = f x
It takes an arbitrary argument on its left side and a function on its right side, then applies the function to the argument. And because of partial application we can conveniently get a function to pass in on the right side.
So in three lines of ordinary idiomatic Elm code we've used partial application four times. Without that, and currying, we'd have to write something like:
List.filter (\thing -> thing /= 5) (List.map (\thing -> add 2 thing) things)
Or we might want to write it with some variable bindings to make it more readable:
let
    add2ToThings thing =
        add 2 thing

    thingsWith2Added =
        List.map add2ToThings things

    thingsWith2AddedAndWithout5 =
        List.filter (\thing -> thing /= 5) thingsWith2Added
in
thingsWith2AddedAndWithout5
And so that's why functional programming is awesome.
I was playing with this in 2018.01:
my $proc = Proc.new: :out;
my $f = $proc.clone;
$f.spawn: 'ls';
put $f.out.slurp;
It says it can't do it. It's curious that the error message is about a routine I didn't use and a different class:
Cannot resolve caller stdout(Proc::Async: :bin); none of these signatures match:
(Proc::Async:D $: :$bin!, *%_)
(Proc::Async:D $: :$enc, :$translate-nl, *%_)
in block <unit> at proc-out.p6 line 3
Everything inherits a default clone method from Mu, which does a shallow clone, but that doesn't mean that everything makes sense to clone. This especially goes for objects that might hold references to OS-level things, such as Proc or IO::Handle. As the person who designed Proc::Async, I can say for certain that making it do anything useful on clone was not a design consideration. I didn't design Proc, but I suspect the same applies.
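If the aim was to reuse one spawn configuration several times, a dependable route is simply constructing a fresh Proc per run, as in the question's snippet minus the clone. A sketch:

for ^2 {
    my $p = Proc.new: :out;
    $p.spawn: 'ls';
    put $p.out.slurp: :close;
}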
As for the error, keep in mind that the Perl 6 standard library is implemented in Perl 6 (a lot like in Java and .Net, but less like Perl 5 where many things that are provided by default go directly to something written in C). In this particular case, Proc is implemented in terms of Proc::Async. Rakudo tries to trim stack traces somewhat to eliminate calls inside of the setting, which is usually a win for the language user, but in cases like this can be a little less helpful. Running Rakudo with the --ll-exception flag provides the full details, and thus makes clearer what is going on.
How can I attach an arbitrary tag to a closure in Scheme?
Here are a couple things I'd like to use this for:
(1) To mark closures that provide an interface to produce a string for what they represent, like what #kud0h asked for here. A general ->string procedure could include code something like this:
(display (if (stringable? x)
             (x 'string)
             x)
         str-port)
(2) More generally, to determine if a closure is an "object" that obeys the rules of a general object interface, or maybe to tell the class of an object (something like what #KPatnode was asking about here).
I can't query a procedure to see if it supports a certain interface by calling it, because if it doesn't support a known interface, calling the procedure will produce unpredictable results, most likely a run-time error.
Chez Scheme has putprop and getprop procedures that allow you to add keys and values to symbols. However, closures can be anonymous, or bound to different symbols, so I'd prefer to attach a calling-convention tag to the closure itself, not a symbol that it's bound to.
The only idea I have right now is to maintain a global hash table of all "stringable" or "object" closures in the system. That seems a little clunky. Is there a simpler, more elegant, or more efficient way?
Racket has applicable structures: you can give a structure type an apply hook to be called if an instance is used as a function.
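A sketch of what that looks like (the struct and field names are mine):

;; a struct with prop:procedure can be applied like a plain procedure;
;; here the proc field supplies the behavior when the instance is called
(struct tagged-proc (tag proc)
  #:property prop:procedure (struct-field-index proc))

(define double (tagged-proc 'stringable (lambda (x) (* 2 x))))
(double 21)              ; => 42, called like any procedure
(tagged-proc-tag double) ; => 'stringable, the attached tag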
If you want a more portable solution, you can use a hash table to associate your data with certain procedures. Unless your Scheme provides weak hashtables, though, keep in mind that the hashtable will prevent the procedures from being garbage-collected.
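For instance, with the R6RS hashtables that Chez provides (the helper names here are mine):

(define *closure-tags* (make-eq-hashtable)) ; eq? keys, so each closure is its own key

(define (tag! proc key value)
  (hashtable-set! *closure-tags* proc
                  (cons (cons key value)
                        (hashtable-ref *closure-tags* proc '()))))

(define (tag-ref proc key)
  (cond ((assq key (hashtable-ref *closure-tags* proc '())) => cdr)
        (else #f)))

(define (stringable? x)
  (and (procedure? x) (tag-ref x 'string) #t))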
I think you might, instead of tagging procedures per se, want to look at Racket's object system, which has a concept of interfaces. It sounds quite similar to what you're after.
You could go extreme and redefine lambda syntax. Something like this (but untested by me):
(define *properties* '()) ;; example only
(define-syntax lambda
  (let-syntax ((sys-lambda
                 (syntax-rules ()
                   ((_ args body ...)
                    (lambda args body ...)))))
    (syntax-rules ()
      ((_ args body ...)
       (let ((func (sys-lambda args body ...)))
         (set! *properties*
               (cons (cons func '(NO-PROPERTIES))
                     *properties*))
         func)))))
In my language I can use a class variable in my method even when the definition appears below the method. A method can also call methods defined below it, etc. There are no 'headers'. Take this C# example:
class A
{
    public void callMethods() { print(); B b; b.notYetSeen(); }
    public void print() { Console.Write("v = {0}", v); }
    int v = 9;
}
class B
{
    public void notYetSeen() { Console.Write("notYetSeen()\n"); }
}
How should I compile that? What I was thinking is:
pass 1: convert everything to an AST
pass 2: go through all classes and build a list of defined classes/variables/etc.
pass 3: go through the code, check for errors (undefined variables, wrong usage, etc.), and create my output
But it seems like for this to work I have to do passes 1 and 2 for ALL files before doing pass 3. It also feels like a lot of work to do before I find an error (other than the obvious ones that can be caught at parse time, such as forgetting to close a brace or writing 0xLETTERS instead of a hex value). My gut says there is some other way.
Note: I am using bison/flex to generate my compiler.
My understanding of languages that handle forward references is that they typically just use the first pass to build a list of valid names. Something along the lines of just putting an entry in a table (without filling out the definition) so you have something to point to later when you do your real pass to generate the definitions.
If you try to actually build full definitions as you go, you would end up having to rescan repeatedly, each time saving any references to undefined things until the next pass. Even that would fail if there are circular references.
I would go through on pass one and collect all of your class/method/field names and types, ignoring the method bodies. Then in pass two check the method bodies only.
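A toy sketch of that shape, with the AST boiled down to bare names (every type here is illustrative, not a real compiler API):

using System;
using System.Collections.Generic;

// a class reduced to: its name, its member names, and the names its bodies use
record ClassDecl(string Name, string[] Members, string[] Used);

static class TwoPass
{
    static void Main()
    {
        var classes = new[]
        {
            new ClassDecl("A", new[] { "callMethods", "print", "v" },
                               new[] { "A.print", "A.v", "B.notYetSeen" }),
            new ClassDecl("B", new[] { "notYetSeen" }, Array.Empty<string>()),
        };

        // pass 1: record declarations only; method bodies are ignored
        var declared = new HashSet<string>();
        foreach (var c in classes)
            foreach (var m in c.Members)
                declared.Add($"{c.Name}.{m}");

        // pass 2: all names are known now, so forward references resolve fine
        foreach (var c in classes)
            foreach (var used in c.Used)
                if (!declared.Contains(used))
                    Console.WriteLine($"error: {used} undefined (used in {c.Name})");

        Console.WriteLine("done");
    }
}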
I don't know that there can be any other way than traversing all the files in the source.
I think that you can get it down to two passes: on the first pass, build the AST, and whenever you find a variable name, add it to a list containing that block's symbols (it would probably be useful to add that list to the corresponding scope in the tree). Step two is to linearly traverse the tree and make sure that each symbol used references a symbol in that scope or a scope above it.
My description is oversimplified, but the basic answer is: lookahead requires at least two passes.
The usual approach is to save B as "unknown". It's probably some kind of type (because of the place where you encountered it). So you can just reserve the memory (a pointer) for it even though you have no idea what it really is.
For the method call, you can't do much. In a dynamic language, you'd just save the name of the method somewhere and check whether it exists at runtime. In a static language, you can file it under "unknown methods" somewhere in your compiler, along with the unknown type B. Since method calls eventually translate to a memory address, you can again reserve the memory.
Then, when you encounter B and the method, you can clear up your unknowns. Since you know a bit about them, you can say whether they behave like they should or if the first usage is now a syntax error.
So with this approach you don't have to read all the files twice, though reading them twice surely keeps things simpler.
Alternatively, you can generate those 'header' files (the ones your language doesn't have) as you encounter the sources, and save them somewhere you can find them again. That way you can also speed up compilation (since you won't have to reprocess unchanged files in the next compilation run).
Lastly, if you write a new language, you shouldn't use bison and flex anymore. There are much better tools by now. ANTLR, for example, can produce a parser that can recover after an error, so you can still parse the whole file. Or check this Wikipedia article for more options.