Dynamic/Static scope with Deep/Shallow binding (exercises)

I'm studying dynamic/static scope with deep/shallow binding and running code manually to see how these different scopes/bindings actually work. I read the theory and googled some example exercises; the ones I found are very simple (like this one, which was very helpful with dynamic scoping), but I'm having trouble understanding how static scope works.
Here I post an exercise I did to check whether I got the right solution.
Consider the following program, written in pseudocode:
int u = 42;
int v = 69;
int w = 17;
proc add( z:int )
u := v + u + z
proc bar( fun:proc )
int u := w;
fun(v)
proc foo( x:int, w:int )
int v := x;
bar(add)
main
foo(u,13)
print(u)
end;
What is printed to the screen
a) using static scope? answer=180
b) using dynamic scope and deep binding? answer=69 (sum for u = 126 but it's foo's local v, right?)
c) using dynamic scope and shallow binding? answer=69 (sum for u = 101 but it's foo's local v, right?)
PS: I'm trying to practice by doing exercises like this; if you know where I can find these types of problems (preferably with solutions), please post the link. Thanks!

Your answer for lexical (static) scope is correct. Your answers for dynamic scope are wrong, but if I'm reading your explanations right, it's because you got confused between u and v, rather than because of any real misunderstanding about how deep and shallow binding work. (I'm assuming that your u/v confusion was just accidental, and not due to a strange confusion about values vs. references in the call to foo.)
a) using static scope? answer=180
Correct.
b) using dynamic scope and deep binding? answer=69 (sum for u = 126 but it's foo's local v, right?)
Your parenthetical explanation is right, but your answer is wrong: u is indeed set to 126, and foo indeed localizes v, but since main prints u, not v, the answer is 126.
c) using dynamic scope and shallow binding? answer=69 (sum for u = 101 but it's foo's local v, right?)
The sum for u is actually 97 (42+13+42), but since bar localizes u, the answer is 42. (Your parenthetical explanation is wrong for this one — you seem to have used the global variable w, which is 17, in interpreting the statement int u := w in the definition of bar; but that statement actually refers to foo's local variable w, its second parameter, which is 13. But that doesn't actually affect the answer. Your answer is wrong for this one only because main prints u, not v.)
For lexical scope, it's pretty easy to check your answers by translating the pseudo-code into a language with lexical scope. Likewise dynamic scope with shallow binding. (In fact, if you use Perl, you can test both ways almost at once, since it supports both; just use my for lexical scope, then do a find-and-replace to change it to local for dynamic scope. But even if you use, say, JavaScript for lexical scope and Bash for dynamic scope, it should be quick to test both.)
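For example, here is a minimal sketch of that translation (my own illustrative version, with each pseudocode local declaration becoming a my variable in the corresponding sub):
#!/usr/bin/perl
use strict;
use warnings;

my $u = 42;
my $v = 69;
my $w = 17;

# "int u := w" in bar and "int v := x" in foo become lexical (my)
# declarations, which shadow the file-scoped variables only inside
# the sub that declares them.
sub add { my ($z) = @_; $u = $v + $u + $z; }
sub bar { my ($fun) = @_; my $u = $w; $fun->($v); }
sub foo { my ($x, $w) = @_; my $v = $x; bar(\&add); }

foo($u, 13);
print "$u\n";    # prints 180 under lexical scope
Changing the three file-scoped my declarations to our, and each my inside the subs to local (on pre-declared package variables), turns the same program into dynamic scope with shallow binding; run that way it should print 42, matching answer (c).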
Dynamic scope with deep binding is much trickier, since few widely-deployed languages support it. If you use Perl, you can implement it manually by using a hash (an associative array) that maps from variable-names to scalar-refs, and passing this hash from function to function. Everywhere that the pseudocode declares a local variable, you save the existing scalar-reference in a Perl lexical variable, then put the new mapping in the hash; and at the end of the function, you restore the original scalar-reference. To support the binding, you create a wrapper function that creates a copy of the hash, and passes that to its wrapped function. Here is a dynamically-scoped, deeply-binding implementation of your program in Perl, using that approach:
#!/usr/bin/perl -w
use warnings;
use strict;
# Create a new scalar, initialize it to the specified value,
# and return a reference to it:
sub new_scalar($)
{ return \(shift); }
# Bind the specified procedure to the specified environment:
sub bind_proc(\%$)
{
my $V = { %{+shift} };
my $f = shift;
return sub { $f->($V, @_); };
}
my $V = {};
$V->{u} = new_scalar 42; # int u := 42
$V->{v} = new_scalar 69; # int v := 69
$V->{w} = new_scalar 17; # int w := 17
sub add(\%$)
{
my $V = shift;
my $z = $V->{z}; # save existing z
$V->{z} = new_scalar shift; # create & initialize new z
${$V->{u}} = ${$V->{v}} + ${$V->{u}} + ${$V->{z}};
$V->{z} = $z; # restore old z
}
sub bar(\%$)
{
my $V = shift;
my $fun = shift;
my $u = $V->{u}; # save existing u
$V->{u} = new_scalar ${$V->{w}}; # create & initialize new u
$fun->(${$V->{v}});
$V->{u} = $u; # restore old u
}
sub foo(\%$$)
{
my $V = shift;
my $x = $V->{x}; # save existing x
$V->{x} = new_scalar shift; # create & initialize new x
my $w = $V->{w}; # save existing w
$V->{w} = new_scalar shift; # create & initialize new w
my $v = $V->{v}; # save existing v
$V->{v} = new_scalar ${$V->{x}}; # create & initialize new v
bar %$V, bind_proc %$V, \&add;
$V->{v} = $v; # restore old v
$V->{w} = $w; # restore old w
$V->{x} = $x; # restore old x
}
foo %$V, ${$V->{u}}, 13;
print "${$V->{u}}\n";
__END__
and indeed it prints 126. It's obviously messy and error-prone, but it also really helps you understand what's going on, so for educational purposes I think it's worth it!

Shallow and deep binding are Lisp-interpreter viewpoints of the pseudocode. Scoping is just pointer manipulation. Dynamic scope and static scope are the same if there are no free variables.
Static scope relies on a pointer to memory. An empty environment holds no symbol-to-value associations; it is denoted here by the word "End". Each time the interpreter reads a variable definition, it makes space for an association between a symbol and a value.
The environment pointer is updated to point to the last association constructed.
env = End
env = [u,42] -> End
env = [v,69] -> [u,42] -> End
env = [w,17] -> [v,69] -> [u,42] -> End
Let me record this environment memory location as AAA. In my Lisp interpreter, when meeting a procedure, we take the environment pointer and put it in our pocket.
env = [add,[closure,(lambda (z) (setq u (+ v u z))),*AAA*]]->[w,17]->[v,69]->[u,42]->End.
That's pretty much all there is until the procedure add is called. Interestingly, if add is never called, you just cost yourself a pointer.
Suppose the program calls add(8). OK, let's roll. The environment AAA is made current. The environment is [w,17]->[v,69]->[u,42]->End.
Procedure parameters of add are added to the front of the environment. The environment becomes [z,8]->[w,17]->[v,69]->[u,42]->End.
Now the procedure body of add is executed. Free variable v will have value 69. Free variable u will have value 42. z will have the value 8.
u := v + u + z
u will be assigned the value of 69 + 42 + 8, becoming 119.
The environment will reflect this: [z,8]->[w,17]->[v,69]->[u,119]->End.
Assume procedure add has completed its task. Now the environment gets restored to its previous value.
env = [add,[closure,(lambda (z) (setq u (+ v u z))),*AAA*]]->[w,17]->[v,69]->[u,119]->End.
Notice how the procedure add has had a side effect of changing the value of free variable u. Awesome!
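To make the environment mechanics concrete, here is a small sketch of that association list in Perl (the language used elsewhere in this thread); the helper names lookup and assign are purely illustrative:
use strict;
use warnings;

# The environment is a list of [symbol, value] pairs, newest first;
# the empty tail plays the role of "End".
my @env = ( [ w => 17 ], [ v => 69 ], [ u => 42 ] );

sub lookup {
    my ($name, $env) = @_;
    for my $pair (@$env) {
        return $pair->[1] if $pair->[0] eq $name;
    }
    die "unbound symbol: $name";
}

sub assign {
    my ($name, $value, $env) = @_;
    for my $pair (@$env) {
        if ($pair->[0] eq $name) { $pair->[1] = $value; return }
    }
    die "unbound symbol: $name";
}

# Calling add(8): the parameter binding goes on the front of the environment...
my @call_env = ( [ z => 8 ], @env );

# ...and the body  u := v + u + z  is evaluated against it.
assign( u => lookup( v => \@call_env )
           + lookup( u => \@call_env )
           + lookup( z => \@call_env ),
        \@call_env );

print lookup( u => \@env ), "\n";   # 119 -- the side effect on u survives the call
Lookup walks the pairs from newest to oldest, which is exactly the "stop at the first binding found" rule, and because the [u,42] pair is shared between the two environments, the assignment inside the simulated add(8) call is the side effect that is visible afterwards.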
Regarding dynamic scoping: the closure simply leaves out the dynamic symbols, so they are not captured and their lookup stays dynamic.
The assignment to a dynamic variable then goes at the top of the code. If a dynamic variable has the same name as a parameter, it gets masked by the parameter value passed in.
Suppose I had a dynamic variable called z. When I called add(8), z would have been set to 8 regardless of what I wanted. That's probably why dynamic variables tend to have longer names.
Rumour has it that dynamic variables are useful for things like backtracking, using Lisp's let construct.

Static binding, also known as lexical scope, refers to the scoping mechanism found in most modern languages.
In "lexical scope", the final value for u is neither 180 or 119, which are wrong answers.
The correct answer is u=101.
Please see standard Perl code below to understand why.
use strict;
use warnings;
my $u = 42;
my $v = 69;
my $w = 17;
sub add {
my $z = shift;
$u = $v + $u + $z;
}
sub bar {
my $fun = shift;
$u = $w;
$fun->($v);
}
sub foo {
my ($x, $w) = @_;
$v = $x;
bar( \&add );
}
foo($u,13);
print "u: $u\n";
Regarding shallow binding versus deep binding, both mechanisms date from the early LISP era.
Both mechanisms are meant to achieve dynamic binding (versus lexical scope binding), and therefore they produce identical results!
The differences between shallow binding and deep binding do not reside in the semantics, which are identical, but in the implementation of dynamic binding.
With deep binding, variable bindings are set within a stack as "varname => varvalue" pairs.
The value of a given variable is retrieved from traversing the stack from top to bottom until a binding for the given variable is found.
Updating the variable consists of finding the binding in the stack and updating the associated value.
On entering a subroutine, a new binding for each actual parameter is pushed onto the stack, potentially hiding an older binding which is therefore no longer accessible via the retrieval mechanism described above (which stops at the first binding found).
On leaving the subroutine, the bindings for these parameters are simply popped from the binding stack, re-enabling access to the former bindings.
Please see the code below for a Perl implementation of deep-binding dynamic scope.
use strict;
use warnings;
use utf8;
##
# Dynamic-scope deep-binding implementation
my @stack = ();
sub bindv {
my ($varname, $varval);
unshift @stack, [ $varname => $varval ]
while ($varname, $varval) = splice @_, 0, 2;
return $varval;
}
sub unbindv {
my $n = shift || 1;
shift @stack while $n-- > 0;
}
sub getv {
my $varname = shift;
for (my $i=0; $i < @stack; $i++) {
return $stack[$i][1]
if $varname eq $stack[$i][0];
}
return undef;
}
sub setv {
my ($varname, $varval) = @_;
for (my $i=0; $i < @stack; $i++) {
return $stack[$i][1] = $varval
if $varname eq $stack[$i][0];
}
return bindv($varname, $varval);
}
##
# EXERCISE
bindv( u => 42,
v => 69,
w => 17,
);
sub add {
bindv(z => shift);
setv(u => getv('v')
+ getv('u')
+ getv('z')
);
unbindv();
}
sub bar {
bindv(fun => shift);
setv(u => getv('w'));
getv('fun')->(getv('v'));
unbindv();
}
sub foo {
bindv(x => shift,
w => shift,
);
setv(v => getv('x'));
bar( \&add );
unbindv(2);
}
foo( getv('u'), 13);
print "u: ", getv('u'), "\n";
The result is u=97
Nevertheless, this constant traversal of the binding stack is costly: O(n) complexity!
Shallow binding brings a wonderful O(1) performance enhancement over the previous implementation!
Shallow binding improves on the former mechanism by assigning each variable its own "cell", storing the value of the variable within the cell.
The value of a given variable is simply retrieved from the variable's cell (using a hash table on variable names, we achieve O(1) complexity for accessing variables' values!).
Updating the variable's value is simply storing the value into the variable's cell.
Creating a new binding (entering a sub) works by pushing the old value of the variable (a previous binding) onto the stack, and storing the new local value in the value cell.
Eliminating a binding (leaving a sub) works by popping the old value off the stack into the variable's value cell.
Please see the code below for a trivial Perl implementation of shallow-binding dynamic scope.
use strict;
use warnings;
our $u = 42;
our $v = 69;
our $w = 17;
our $z;
our $fun;
our $x;
sub add {
local $z = shift;
$u = $v + $u + $z;
}
sub bar {
local $fun = shift;
$u = $w;
$fun->($v);
}
sub foo {
local $x = shift;
local $w = shift;
$v = $x;
bar( \&add );
}
foo($u,13);
print "u: $u\n";
As you can see, the result is still u=97.
In conclusion, remember two things:
shallow binding produces the same results as deep binding, but runs faster, since there is never a need to search for a binding;
the problem is not shallow binding versus deep binding versus static binding, but lexical scope versus dynamic scope (implemented with either deep or shallow binding).

Related

IIFE alternatives in Raku

In Ruby I can group together some lines of code like so with a begin block:
x = begin
puts "Hi!"
a = 2
b = 3
a + b
end
puts x # 5
It's immediately evaluated and its value is the last value of the block (a + b here). (JavaScripters do a similar thing with IIFEs.)
What are the ways to do this in Raku? Is there anything smoother than:
my $x = ({
say "Hi!";
my $a = 2;
my $b = 3;
$a + $b;
})();
say $x; # 5
Insert a do in front of the block. This tells Raku to:
Immediately do whatever follows the do on its right hand side;
Return the value to the do's left hand side:
my $x = do {
put "Hi!";
my $a = 2;
my $b = 3;
$a + $b;
}
That said, one rarely needs to use do.
Instead, there are many other IIFE forms in Raku that just work naturally without fuss. I'll mention just two because they're used extensively in Raku code:
with whatever { .foo } else { .bar }
You might think I'm being silly, but those are two IIFEs. They form lexical scopes, have parameter lists, bind from arguments, the works. Loads of Raku constructs work like that.
In the above case where I haven't written an explicit parameter list, this isn't obvious. The fact that .foo is called on whatever if whatever is defined, and .bar is called on it if it isn't, is both implicit and due to the particular IIFE calling behavior of with.
See also if, while, given, and many, many more.
What's going on becomes more obvious if you introduce an explicit parameter list with ->:
for whatever -> $a, $b { say $a + $b }
That iterates whatever, binding two consecutive elements from it to $a and $b, until whatever is empty. If it has an odd number of elements, one might write:
for whatever -> $a, $b? { say $a + $b }
And so on.
Bottom line: a huge number of occurrences of {...} in Raku are IIFEs, even if they don't look like it. But if they're immediately after an =, Raku defaults to assuming you want to assign the lambda rather than immediately executing it, so you need to insert a do in that particular case.
Welcome to Raku!
my $x = BEGIN {
say "Hi!";
my $a = 2;
my $b = 3;
$a + $b;
}
I guess the common ancestry of Raku and Ruby shows :-)
Note that to create a constant, you can also use constant:
my constant $x = do {
say "Hi!";
my $a = 2;
my $b = 3;
$a + $b;
}
If you have a single statement, you can leave off the braces:
my $x = BEGIN 2 + 3;
or:
my constant $x = 2 + 3;
Regarding blocks: if they are in sink context (similar to "void" context in some languages), then they will execute just like that:
{
say "Executing block";
}
No need to explicitly call it: it will be called for you :-)

Variable getting overwritten in for loop

In a for loop, a different variable is assigned a value on each iteration. The variable which was already assigned a value in an earlier iteration gets assigned the value from the next iteration. At the end, both variables have the same value.
The code is for validating data in a file. When I print the values, it prints the correct value for the first iteration, but in the next iteration the value assigned in the first iteration is changed.
When I print the values of $value3 and $value4 in the for loop, it shows null for $value4 and some value for $value3, but in the next iteration the value of $value3 is overwritten by the value of $value4.
I have tried this on Rakudo Perl 6.c.
my $fh= $!FileName.IO.open;
my $fileObject = FileValidation.new( file => $fh );
for (3,4).list {
put "Iteration: ", $_;
if ($_ == 4) {
$value4 := $fileObject.FileValidationFunction(%.ValidationRules{4}<ValidationFunction>, %.ValidationRules{4}<Arguments>);
}
if ($_ == 3) {
$value3 := $fileObject.FileValidationFunction(%.ValidationRules{3}<ValidationFunction>, %.ValidationRules{3}<Arguments>);
}
$fh.seek: SeekFromBeginning;
}
TL;DR It's not possible to confidently answer your question as it stands. This is a nanswer -- an answer in the sense I'm writing it as one but also quite possibly not an answer in the sense of helping you fix your problem.
Is it is rw? A first look.
The is rw trait on a routine or class attribute means it returns a container that contains a value rather than just returning a value.
If you then alias that container then you can get the behavior you've described.
For example:
my $foo;
sub bar is rw { $foo = rand }
my ($value3, $value4);
$value3 := bar;
.say for $value3, $value4;
$value4 := bar;
.say for $value3, $value4;
displays:
0.14168492246366005
(Any)
0.31843665763839857
0.31843665763839857
This isn't a bug in the language or compiler. It's just P6 code doing what it's supposed to do.
A longer version of the same thing
Perhaps the above is so far from your code it's disorienting. So here's the same thing wrapped in something like the code you provided.
spurt 'junk', 'junk';
class FileValidation {
has $.file;
has $!foo;
method FileValidationFunction ($,$) is rw { $!foo = rand }
}
class bar {
has $!FileName = 'junk';
has %.ValidationRules =
{ 3 => { ValidationFunction => {;}, Arguments => () },
4 => { ValidationFunction => {;}, Arguments => () } }
my ($value3, $value4);
method baz {
my $fh= $!FileName.IO.open;
my $fileObject = FileValidation.new( file => $fh );
my ($value3, $value4);
for (3,4).list {
put "Iteration: ", $_;
if ($_ == 4) {
$value4 := $fileObject.FileValidationFunction(
%.ValidationRules{4}<ValidationFunction>, %.ValidationRules{4}<Arguments>);
}
if ($_ == 3) {
$value3 := $fileObject.FileValidationFunction(
%.ValidationRules{3}<ValidationFunction>, %.ValidationRules{3}<Arguments>);
}
$fh.seek: SeekFromBeginning;
.say for $value3, $value4
}
}
}
bar.new.baz
This outputs:
Iteration: 3
0.5779679442816953
(Any)
Iteration: 4
0.8650280000277686
0.8650280000277686
Is it is rw? A second look.
Brad and I came up with essentially the same answer (at the same time; I was a minute ahead of Brad but who's counting? I mean besides me? :)) but Brad nicely nails the fix:
One way to avoid aliasing a container is to just use =.
(This is no doubt also why @ElizabethMattijsen++ asked about trying = instead of :=.)
You've commented that changing from := to = made no difference.
But presumably you didn't change from := to = throughout your entire codebase but rather just (the equivalent of) the two in the code you've shared.
So perhaps the problem can still be fixed by switching from := to =, but in some of your code elsewhere. (That said, don't just globally replace := with =. Instead, make sure you understand their difference and then change them as appropriate. You've got a test suite, right? ;))
How to move forward if you're still stuck
Right now your question has received several upvotes and no downvotes and you've got two answers (that point to the same problem).
But maybe our answers aren't good enough.
If so...
The addition of the reddit comment, and trying = instead of :=, and trying the latest compiler, and commenting on those things, leaves me glad I didn't downvote your question, but I haven't upvoted it yet and there's a reason for that. It's because your question is still missing a Minimal Reproducible Example.
You responded to my suggestion about producing an MRE with:
The problem is that I am not able to replicate this in a simpler environment
I presumed that's your situation, but as you can imagine, that means we can't confidently replicate it at all. That may be the way you prefer to go for reasons but it goes against SO guidance (in the link above) and if the current answers aren't adequate then the sensible way forward is for you to do what it takes to share code that reproduces your problem.
If it's large, please don't just paste it into your question but instead link to it. Perhaps you can set it up on glot.io using the + button to use multiple files (up to 6 I think, plus there's a standard input too). If not, perhaps gist it via, say, gist.github.com, and if I can I'll set it up on glot.io for you.
What is probably happening is that you are returning a container rather than a value, then aliasing the container to a variable.
class Foo {
has $.a is rw;
}
my $o = Foo.new( a => 1 );
my $old := $o.a;
say $old; # 1
$o.a = 2;
say $old; # 2
One way to avoid aliasing a container is to just use =.
my $old = $o.a;
say $old; # 1
$o.a = 2;
say $old; # 1
You could also decontainerize the value using either .self or .<>
my $old := $o.a.<>;
say $old; # 1
$o.a = 2;
say $old; # 1
(Note that .<> above could be .self or just <>.)

(Identifier) terms vs. constants vs. null signature routines

Identifier terms are defined in the documentation alongside constants, with pretty much the same use case, although terms compute their value at run time while constants get it at compile time. Potentially, that could make terms use global variables, but that's action at a distance and ugly, so I guess that's not their use case.
OTOH, they could simply be routines with a null signature:
sub term:<þor> { "Is mighty" }
sub Þor { "Is mighty" }
say þor, Þor;
But you can already define routines with a null signature. Using a term does, however, save you the error you get when you write:
say Þor ~ Þor;
which would produce a "Too many positionals passed; expected 0 arguments but got 1", unlike the term. That seems, however, a bit far-fetched, and you can save yourself the trouble by just adding () at the end.
Another possible use case is defying the rules of normal identifiers:
sub term:<✔> { True }
say ✔; # True
Are there any other use cases I have missed?
Making zero-argument subs work as terms would break the possibility of post-declaring subs, since finding a sub after having parsed usages of it would require re-parsing of earlier code (which the Perl 6 language refuses to do, "one-pass parsing" and all that) if the sub takes no arguments.
Terms are useful in combination with the ternary operator:
$ perl6 -e 'sub a() { "foo" }; say 42 ?? a !! 666'
===SORRY!=== Error while compiling -e
Your !! was gobbled by the expression in the middle; please parenthesize
$ perl6 -e 'sub term:<a> { "foo" }; say 42 ?? a !! 666'
foo
Constants are basically terms. So of course they are grouped together.
constant foo = 12;
say foo;
constant term:<bar> = 36;
say bar;
There is a slight difference because term:<…> works by modifying the parser. So it takes precedence.
constant fubar = 38;
constant term:<fubar> = 45;
say fubar; # 45
The above will print 45 regardless of which constant definition comes first.
Since term:<…> takes precedence the only way to get at the other value is to use ::<fubar> to directly access the symbol table.
say ::<fubar>; # 38
say ::<term:<fubar>>; # 45
There are two main use-cases for term:<…>.
One is to get a subroutine to be parsed similarly to a constant or sigilless variable.
sub fubar () { 'fubar'.comb.roll }
# say( fubar( prefix:<~>( 4 ) ) );
say fubar ~ 4; # ERROR
sub term:<fubar> () { 'fubar'.comb.roll }
# say( infix:<~>( fubar, 4 ) );
say fubar ~ 4;
The other is to have a constant or sigilless variable be something other than a normal identifier.
my \✔ = True; # ERROR: Malformed my
my \term:<✔> = True;
say ✔;
Of course both use-cases can be combined.
sub term:<✔> () { True }
Perl 5 allows subroutines to have an empty prototype (different from a signature), which alters how the sub gets parsed. The main purpose of prototypes in Perl 5 is to alter how the code gets parsed.
use v5;
sub fubar () { ord [split('','fubar')]->[rand 5] }
# say( fubar() + 4 );
say fubar + 4; # infix +
use v5;
sub fubar { ord [split('','fubar')]->[rand 5] }
# say( fubar( +4 ) );
say fubar + 4; # prefix +
Perl 6 doesn't use signatures the way Perl 5 uses prototypes. The main way to alter how Perl 6 parses code is by using the namespace.
use v6;
sub fubar ( $_ ) { .comb.roll }
sub term:<fubar> () { 'fubar'.comb.roll }
say fubar( 'zoo' ); # `z` or `o` (`o` is twice as likely)
say fubar; # `f` or `u` or `b` or `a` or `r`
sub prefix:<✔> ( $_ ) { "checked $_" }
say ✔ 'under the bed'; # checked under the bed
Note that Perl 5 doesn't really have constants; they are just subroutines with an empty prototype.
use v5;
use constant foo => 12;
use v5;
sub foo () { 12 } # ditto
(This became less true after 5.16)
As far as I know all of the other uses of prototypes have been superseded by design decisions in Perl 6.
use v5;
sub foo (&$) { $_[0]->($_[1]) }
say foo { 100 + $_[0] } 5; # 105;
That block is seen as a sub lambda because of the prototype of the foo subroutine.
use v6;
# sub foo ( &f, $v ) { f $v }
sub foo { @_[0].( @_[1] ) }
say foo { 100 + @_[0] }, 5; # 105
In Perl 6 a block is seen as a lambda if a term is expected. So there is no need to alter the parser with a feature like a prototype.
You are asking for exactly one use of prototypes to be brought back even though there is already a feature that covers that use-case.
Doing so would be a special-case. Part of the design ethos of Perl 6 is to limit the number of special-cases.
Other versions of Perl had a wide variety of special-cases, and it isn't always easy to remember them all.
Don't get me wrong; the special-cases in Perl 5 are useful, but Perl 6 has for the most part made them general-cases.

Binding of private attributes: nqp::bindattr vs :=

I'm trying to find how the binding operation works on attributes and what makes it so different from nqp::bindattr. Consider the following example:
class Foo {
has @!foo;
submethod TWEAK {
my $fval = [<a b c>];
use nqp;
nqp::bindattr( nqp::decont(self), $?CLASS, '@!foo',
#@!foo :=
Proxy.new(
FETCH => -> $ { $fval },
STORE => -> $, $v { $fval = $v }
)
);
}
method check {
say @!foo.perl;
}
}
my $inst = Foo.new;
$inst.check;
It prints:
$["a", "b", "c"]
Replacing nqp::bindattr with the binding operator from the comment gives correct output:
["a", "b", "c"]
Similarly, if foo were a public attribute and the accessor were used, the output would be correct too, due to decontainerization taking place within the accessor.
I use similar code in my AttrX::Mooish module, where use of := would overcomplicate the implementation. So far nqp::bindattr has done a good job for me, until the above problem arose.
I tried tracing through Rakudo's internals looking for the := implementation, but without any success so far. I would ask here either for advice on how to simulate the operator, or for where in the source to look for its implementation.
Before I dig into the answer: most things in this post are implementation-defined, and the implementation is free to define them differently in the future.
To find out what something (naively) compiles into under Rakudo Perl 6, use the --target=ast option (perl6 --target=ast foo.p6). For example, the bind in:
class C {
has $!a;
submethod BUILD() {
my $x = [1,2,3];
$!a := $x
}
}
Comes out as:
- QAST::Op(bind) :statement_id<7>
- QAST::Var(attribute $!a) <wanted> $!a
- QAST::Var(lexical self)
- QAST::WVal(C)
- QAST::Var(lexical $x) $x
While switching it for @!a like here:
class C {
has @!a;
submethod BUILD() {
my $x = [1,2,3];
@!a := $x
}
}
Comes out as:
- QAST::Op(bind) :statement_id<7>
- QAST::Var(attribute @!a) <wanted> @!a
- QAST::Var(lexical self)
- QAST::WVal(C)
- QAST::Op(p6bindassert)
- QAST::Op(decont)
- QAST::Var(lexical $x) $x
- QAST::WVal(Positional)
The decont instruction is the big difference here, and it will take the contents of the Proxy by calling its FETCH, which is why the containerization is gone. Thus, you can replicate the behavior by inserting nqp::decont around the Proxy, although that rather begs the question of what the Proxy is doing there if the correct answer is obtained without it!
Both := and = are compiled using case analysis (namely, by looking at what is on the left hand side). := only works for a limited range of simple expressions on the left; it is a decidedly low-level operator. By contrast, = falls back to a sub call if the case analysis doesn't come up with a more efficient form to emit, though in most cases it manages something better.
The case analysis for := inserts a decont when the target is a lexical or attribute with sigil @ or %, since - at a Perl 6 level - having an item bound to an @ or % makes no sense. Using nqp::bindattr is going a level below Perl 6 semantics, and so it's possible to end up with the Proxy bound directly there using that. However, it also violates expectations elsewhere. Don't expect that to go well (but it seems you don't want to do that anyway).

How do I write a generic memoize function?

I'm writing a function to find triangle numbers and the natural way to write it is recursively:
function triangle (x)
if x == 0 then return 0 end
return x+triangle(x-1)
end
But attempting to calculate the first 100,000 triangle numbers fails with a stack overflow after a while. This is an ideal function to memoize, but I want a solution that will memoize any function I pass to it.
Mathematica has a particularly slick way to do memoization, relying on the fact that hashes and function calls use the same syntax:
triangle[0] = 0;
triangle[x_] := triangle[x] = x + triangle[x-1]
That's it. It works because the rules for pattern-matching function calls are such that it always uses a more specific definition before a more general definition.
Of course, as has been pointed out, this example has a closed-form solution: triangle[x_] := x*(x+1)/2. Fibonacci numbers are the classic example of how adding memoization gives a drastic speedup:
fib[0] = 1;
fib[1] = 1;
fib[n_] := fib[n] = fib[n-1] + fib[n-2]
Although that too has a closed-form equivalent, albeit messier: http://mathworld.wolfram.com/FibonacciNumber.html
I disagree with the person who suggested this was inappropriate for memoization because you could "just use a loop". The point of memoization is that any repeat function calls are O(1) time. That's a lot better than O(n). In fact, you could even concoct a scenario where the memoized implementation has better performance than the closed-form implementation!
You're also asking the wrong question for your original problem ;)
This is a better way for that case:
triangle(n) = n * (n + 1) / 2
Furthermore, supposing the formula didn't have such a neat solution, memoisation would still be a poor approach here. You'd be better off just writing a simple loop in this case. See this answer for a fuller discussion.
I bet something like this should work with variable argument lists in Lua:
local function varg_tostring(...)
local s = select(1, ...)
for n = 2, select('#', ...) do
s = s..","..select(n,...)
end
return s
end
local function memoize(f)
local cache = {}
return function (...)
local al = varg_tostring(...)
if cache[al] then
return cache[al]
else
local y = f(...)
cache[al] = y
return y
end
end
end
You could probably also do something clever with metatables and __tostring so that the argument list could just be converted with tostring(). Oh, the possibilities.
In C# 3.0 - for recursive functions, you can do something like:
public static class Helpers
{
public static Func<A, R> Memoize<A, R>(this Func<A, Func<A,R>, R> f)
{
var map = new Dictionary<A, R>();
Func<A, R> self = null;
self = (a) =>
{
R value;
if (map.TryGetValue(a, out value))
return value;
value = f(a, self);
map.Add(a, value);
return value;
};
return self;
}
}
Then you can create a memoized Fibonacci function like this:
var memoized_fib = Helpers.Memoize<int, int>((n,fib) => n > 1 ? fib(n - 1) + fib(n - 2) : n);
Console.WriteLine(memoized_fib(40));
In Scala (untested):
def memoize[A, B](f: (A)=>B) = {
var cache = Map[A, B]()
{ x: A =>
if (cache contains x) cache(x) else {
val back = f(x)
cache += (x -> back)
back
}
}
}
Note that this only works for functions of arity 1, but with currying you could make it work. The more subtle problem is that memoize(f) != memoize(f) for any function f. One very sneaky way to fix this would be something like the following:
val correctMem = memoize(memoize _)
I don't think that this will compile, but it does illustrate the idea.
Update: Commenters have pointed out that memoization is a good way to optimize recursion. Admittedly, I hadn't considered this before, since I generally work in a language (C#) where generalized memoization isn't so trivial to build. Take the post below with that grain of salt in mind.
I think Luke likely has the most appropriate solution to this problem, but memoization is not generally the solution to any issue of stack overflow.
Stack overflow usually is caused by recursion going deeper than the platform can handle. Languages sometimes support "tail recursion", which re-uses the context of the current call, rather than creating a new context for the recursive call. But a lot of mainstream languages/platforms don't support this. C# has no inherent support for tail-recursion, for example. The 64-bit version of the .NET JITter can apply it as an optimization at the IL level, which is all but useless if you need to support 32-bit platforms.
If your language doesn't support tail recursion, your best option for avoiding stack overflows is either to convert to an explicit loop (much less elegant, but sometimes necessary), or find a non-iterative algorithm such as Luke provided for this problem.
function memoize (f)
local cache = {}
return function (x)
if cache[x] then
return cache[x]
else
local y = f(x)
cache[x] = y
return y
end
end
end
triangle = memoize(triangle);
Note that to avoid a stack overflow, triangle would still need to be seeded.
Here's something that works without converting the arguments to strings.
The only caveat is that it can't handle a nil argument. But the accepted solution can't distinguish the value nil from the string "nil", so that's probably OK.
local function m(f)
local t = { }
local function mf(x, ...) -- memoized f
assert(x ~= nil, 'nil passed to memoized function')
if select('#', ...) > 0 then
t[x] = t[x] or m(function(...) return f(x, ...) end)
return t[x](...)
else
t[x] = t[x] or f(x)
assert(t[x] ~= nil, 'memoized function returns nil')
return t[x]
end
end
return mf
end
I've been inspired by this question to implement (yet another) flexible memoize function in Lua.
https://github.com/kikito/memoize.lua
Main advantages:
Accepts a variable number of arguments
Doesn't use tostring; instead, it organizes the cache in a tree structure, using the parameters to traverse it.
Works just fine with functions that return multiple values.
Pasting the code here as reference:
local globalCache = {}
local function getFromCache(cache, args)
local node = cache
for i=1, #args do
if not node.children then return {} end
node = node.children[args[i]]
if not node then return {} end
end
return node.results
end
local function insertInCache(cache, args, results)
local arg
local node = cache
for i=1, #args do
arg = args[i]
node.children = node.children or {}
node.children[arg] = node.children[arg] or {}
node = node.children[arg]
end
node.results = results
end
-- public function
local function memoize(f)
globalCache[f] = { results = {} }
return function (...)
local results = getFromCache( globalCache[f], {...} )
if #results == 0 then
results = { f(...) }
insertInCache(globalCache[f], {...}, results)
end
return unpack(results)
end
end
return memoize
Here is a generic C# 3.0 implementation, if it could help :
public static class Memoization
{
public static Func<T, TResult> Memoize<T, TResult>(this Func<T, TResult> function)
{
var cache = new Dictionary<T, TResult>();
var nullCache = default(TResult);
var isNullCacheSet = false;
return parameter =>
{
TResult value;
if (parameter == null && isNullCacheSet)
{
return nullCache;
}
if (parameter == null)
{
nullCache = function(parameter);
isNullCacheSet = true;
return nullCache;
}
if (cache.TryGetValue(parameter, out value))
{
return value;
}
value = function(parameter);
cache.Add(parameter, value);
return value;
};
}
}
(Quoted from a French blog article)
In the vein of posting memoization in different languages, I'd like to respond to @onebyone.livejournal.com with a non-language-changing C++ example.
First, a memoizer for single arg functions:
template <class Result, class Arg, class ResultStore = std::map<Arg, Result> >
class memoizer1{
public:
template <class F>
const Result& operator()(F f, const Arg& a){
typename ResultStore::const_iterator it = memo_.find(a);
if(it == memo_.end()) {
it = memo_.insert(make_pair(a, f(a))).first;
}
return it->second;
}
private:
ResultStore memo_;
};
Just create an instance of the memoizer, and feed it your function and argument. Just make sure not to share the same memo between two different functions (but you can share it between different implementations of the same function).
Next, a driver function and an implementation. Only the driver function need be public:
int fib(int); // driver
int fib_(int); // implementation
Implemented:
int fib_(int n){
++total_ops;
if(n == 0 || n == 1)
return 1;
else
return fib(n-1) + fib(n-2);
}
And the driver, to memoize
int fib(int n) {
static memoizer1<int,int> memo;
return memo(fib_, n);
}
Permalink showing output on codepad.org. Number of calls is measured to verify correctness. (insert unit test here...)
This only memoizes one-input functions. Generalizing for multiple args or varying arguments is left as an exercise for the reader.
In Perl, generic memoization is easy to get. The Memoize module is part of the Perl core and is highly reliable, flexible, and easy to use.
The example from its manpage:
# This is the documentation for Memoize 1.01
use Memoize;
memoize('slow_function');
slow_function(arguments); # Is faster than it was before
You can add, remove, and customize memoization of functions at run time! You can provide callbacks for custom memento computation.
Memoize.pm even has facilities for making the memento cache persistent, so it does not need to be re-filled on each invocation of your program!
Here's the documentation: http://perldoc.perl.org/5.8.8/Memoize.html
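For instance, here is a quick sketch of my own applying Memoize to the recursive triangle function from the question; warming the cache bottom-up is the same "seeding" idea mentioned in the Lua answer above, and it keeps the recursion shallow:
use strict;
use warnings;
use Memoize;

sub triangle {
    my ($x) = @_;
    return 0 if $x == 0;
    return $x + triangle($x - 1);
}

memoize('triangle');   # replaces triangle with a caching wrapper under the same name

# Warm the cache from the bottom up so no single call recurses deeply;
# afterwards, repeat calls are answered straight from the cache.
triangle($_) for 0 .. 100_000;
print triangle(100_000), "\n";   # 5000050000
After the loop, any triangle($n) with $n up to 100,000 is a cache hit, so repeat calls are effectively O(1).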
Extending the idea, it's also possible to memoize functions with two input parameters:
function memoize2 (f)
local cache = {}
return function (x, y)
if cache[x..','..y] then
return cache[x..','..y]
else
local z = f(x,y)
cache[x..','..y] = z
return z
end
end
end
Notice that parameter order matters in the caching algorithm, so if parameter order doesn't matter in the functions to be memoized the odds of getting a cache hit would be increased by sorting the parameters before checking the cache.
But it's important to note that some functions can't be profitably memoized. I wrote memoize2 to see if the recursive Euclidean algorithm for finding the greatest common divisor could be sped up.
function gcd (a, b)
if b == 0 then return a end
return gcd(b, a%b)
end
As it turns out, gcd doesn't respond well to memoization. The calculation it does is far less expensive than the caching algorithm. Even for large numbers, it terminates fairly quickly. After a while, the cache grows very large. This algorithm is probably as fast as it can be.
Recursion isn't necessary. The nth triangle number is n(n+1)/2, so...
public int triangle(final int n){
return n * (n + 1) / 2;
}
Please don't recurse this. Either use the x*(x+1)/2 formula or simply iterate the values and memoize as you go.
int[] memo = new int[n+1];
int sum = 0;
for(int i = 0; i <= n; ++i)
{
sum+=i;
memo[i] = sum;
}
return memo[n];