Sometimes I have a control structure (if, for, ...), and depending on a condition I either want to use the control structure or only execute its body. As a simple example, I can do the following in C, but it's pretty ugly:
#ifdef APPLY_FILTER
if (filter()) {
#endif
// do something
#ifdef APPLY_FILTER
}
#endif
Also, it doesn't work if apply_filter is only known at runtime. Of course, in this simple case I can just change the code to:
if (apply_filter && filter())
but that doesn't work in the general case of arbitrary control structures. (I don't have a nice example at hand, but recently I had some code that would have benefited a lot from a feature like this.)
Is there any language where I can apply conditions to control structures, i.e. have higher-order conditionals? In pseudocode, the above example would be:
<if apply_filter>
if (filter()) {
// ...
}
Or a more complicated example: if a variable is set, wrap the code in a function and start it as a thread:
<if (run_on_thread)>
void thread() {
<endif>
for (int i = 0; i < 10; i++) {
printf("%d\n", i);
sleep(1);
}
<if (run_on_thread)>
}
start_thread(&thread);
<endif>
(Actually, in this example I could imagine it would even be useful to give the meta condition a name, to ensure that the top <if> and the bottom <endif> stay in sync.)
I could imagine something like this is a feature in LISP, right?
Any language with first-class functions can pull this off. In fact, your use of "higher-order" is telling; the necessary abstraction will indeed be a higher-order function. The idea is to write a function applyIf which takes a boolean (enabled/disabled), a control-flow operator (really, just a function), and a block of code (any value in the domain of the function); then, if the boolean is true, the operator/function is applied to the block/value, and otherwise the block/value is just run/returned. This will be a lot clearer in code.
In Haskell, for instance, this pattern (without an explicit applyIf) would be written as:
example1 = (if applyFilter then when someFilter else id) body
example2 = (if runOnThread then (void . forkIO) else id) . forM_ [1..10] $ \i ->
print i >> threadDelay 1000000 -- threadDelay takes microseconds
Here, id is just the identity function \x -> x; it always returns its argument. Thus, (if cond then f else id) x is the same as f x if cond == True, and is the same as id x otherwise; and of course, id x is the same as x.
Then you could factor this pattern out into our applyIf combinator:
applyIf :: Bool -> (a -> a) -> a -> a
applyIf True f x = f x
applyIf False _ x = x
-- Or, how I'd probably actually write it:
-- applyIf True = id
-- applyIf False = flip const
-- Note that `flip f a b = f b a` and `const a _ = a`, so
-- `flip const = \_ a -> a` returns its second argument.
example1' = applyIf applyFilter (when someFilter) body
example2' = applyIf runOnThread (void . forkIO) . forM_ [1..10] $ \i ->
print i >> threadDelay 1000000
And then, of course, if some particular use of applyIf was a common pattern in your application, you could abstract over it:
-- Runs its argument on a separate thread if the application is configured to
-- run on more than one thread.
possiblyThreaded action = do
multithreaded <- (> 1) . numberOfThreads <$> getConfig
applyIf multithreaded (void . forkIO) action
example2'' = possiblyThreaded . forM_ [1..10] $ \i ->
print i >> threadDelay 1000000
As mentioned above, Haskell is certainly not alone in being able to express this idea. For instance, here's a translation into Ruby, with the caveat that my Ruby is very rusty, so this is likely to be unidiomatic. (I welcome suggestions on how to improve it.)
def apply_if(use_function, f, &block)
use_function ? f.call(&block) : yield
end
def example1a
do_when = lambda { |&block| if some_filter then block.call() end }
apply_if(apply_filter, do_when) { puts "Hello, world!" }
end
def example2a
apply_if(run_on_thread, Thread.method(:new)) do
(1..10).each { |i| puts i; sleep 1 }
end
end
def possibly_threaded(&block)
apply_if(app_config.number_of_threads > 1, Thread.method(:new), &block)
end
def example2b
possibly_threaded do
(1..10).each { |i| puts i; sleep 1 }
end
end
The point is the same—we wrap up the maybe-do-this-thing logic in its own function, and then apply that to the relevant block of code.
Note that this function is actually more general than just working on code blocks (as the Haskell type signature expresses); you can also, for instance, write abs n = applyIf (n < 0) negate n to implement the absolute value function. The key is to realize that code blocks themselves can be abstracted over, so things like if statements and for loops can just be functions. And we already know how to compose functions!
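For instance, here is a tiny self-contained sketch (restating the applyIf from above so it compiles on its own; the names absolute and the demo main are mine) that applies the very same combinator first to a plain value and then to a block of code:
import Control.Monad (when)
applyIf :: Bool -> (a -> a) -> a -> a
applyIf True f x = f x
applyIf False _ x = x
-- applyIf on a plain value: the absolute-value example from above
absolute :: Int -> Int
absolute n = applyIf (n < 0) negate n
main :: IO ()
main = do
  print (absolute (-7)) -- 7
  -- applyIf on a code block: only run the action when the guard holds
  applyIf True (when (absolute (-7) == 7)) (putStrLn "filter passed")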
Also, all of the code above compiles and/or runs, but you'll need some imports and definitions. For the Haskell examples, you'll need the imports
import Control.Applicative -- for (<$>)
import Control.Monad -- for when, void, and forM_
import Control.Concurrent -- for forkIO and threadDelay
along with some bogus definitions of applyFilter, someFilter, body, runOnThread, numberOfThreads, and getConfig:
applyFilter = False
someFilter = False
body = putStrLn "Hello, world!"
runOnThread = True
getConfig = return 4 :: IO Int
numberOfThreads = id
For the Ruby examples, you'll need no imports and the following analogous bogus definitions:
def apply_filter; false; end
def some_filter; false; end
def run_on_thread; true; end
class AppConfig
attr_accessor :number_of_threads
def initialize(n)
@number_of_threads = n
end
end
def app_config; AppConfig.new(4); end
Common Lisp does not let you redefine if. You can, however, invent your own control structure as a macro in Lisp and use that instead.
Related
In Ruby I can group together some lines of code like so with a begin block:
x = begin
puts "Hi!"
a = 2
b = 3
a + b
end
puts x # 5
It's immediately evaluated, and its value is the value of the last expression in the block (a + b here). (JavaScripters do a similar thing with IIFEs.)
What are the ways to do this in Raku? Is there anything smoother than:
my $x = ({
say "Hi!";
my $a = 2;
my $b = 3;
$a + $b;
})();
say $x; # 5
Insert a do in front of the block. This tells Raku to:
Immediately do whatever follows the do on its right hand side;
Return the value to the do's left hand side:
my $x = do {
put "Hi!";
my $a = 2;
my $b = 3;
$a + $b;
}
That said, one rarely needs to use do.
Instead, there are many other IIFE forms in Raku that just work naturally without fuss. I'll mention just two because they're used extensively in Raku code:
with whatever { .foo } else { .bar }
You might think I'm being silly, but those are two IIFEs. They form lexical scopes, have parameter lists, bind from arguments, the works. Loads of Raku constructs work like that.
In the above case where I haven't written an explicit parameter list, this isn't obvious. The fact that .foo is called on whatever if whatever is defined, and .bar is called on it if it isn't, is both implicit and due to the particular IIFE calling behavior of with.
See also if, while, given, and many, many more.
What's going on becomes more obvious if you introduce an explicit parameter list with ->:
for whatever -> $a, $b { say $a + $b }
That iterates whatever, binding two consecutive elements from it to $a and $b, until whatever is empty. If it has an odd number of elements, one might write:
for whatever -> $a, $b? { say $a + $b }
And so on.
Bottom line: a huge number of occurrences of {...} in Raku are IIFEs, even if they don't look like it. But if they're immediately after an =, Raku defaults to assuming you want to assign the lambda rather than immediately executing it, so you need to insert a do in that particular case.
Welcome to Raku!
my $x = BEGIN {
say "Hi!";
my $a = 2;
my $b = 3;
$a + $b;
}
I guess the common ancestry of Raku and Ruby shows :-)
Note that to create a constant, you can also use constant:
my constant $x = do {
say "Hi!";
my $a = 2;
my $b = 3;
$a + $b;
}
If a single statement suffices, you can leave off the braces:
my $x = BEGIN 2 + 3;
or:
my constant $x = 2 + 3;
Regarding blocks: if they are in sink context (similar to "void" context in some languages), then they will execute just like that:
{
say "Executing block";
}
No need to explicitly call it: it will be called for you :-)
Identifier terms are defined in the documentation alongside constants, with pretty much the same use case, although terms compute their value at run time while constants get theirs at compile time. Potentially, that could make terms use global variables, but that's action at a distance and ugly, so I guess that's not their use case.
OTOH, they could be simply routines with null signature:
sub term:<þor> { "Is mighty" }
sub Þor { "Is mighty" }
say þor, Þor;
But you can already define routines with a null signature. Using a term does, however, avoid the error you get when you write:
say Þor ~ Þor;
which would produce a Too many positionals passed; expected 0 arguments but got 1 error, unlike the term. That seems, however, a bit far-fetched, and you can save the trouble by just adding () at the end.
Another possible use case is defying the rules for normal identifiers:
sub term:<✔> { True }
say ✔; # True
Are there any other use cases I have missed?
Making zero-argument subs work as terms would break the ability to post-declare subs: finding a zero-argument sub after having parsed usages of it would require re-parsing the earlier code, which the Perl 6 language refuses to do ("one-pass parsing" and all that).
Terms are useful in combination with the ternary operator:
$ perl6 -e 'sub a() { "foo" }; say 42 ?? a !! 666'
===SORRY!=== Error while compiling -e
Your !! was gobbled by the expression in the middle; please parenthesize
$ perl6 -e 'sub term:<a> { "foo" }; say 42 ?? a !! 666'
foo
Constants are basically terms. So of course they are grouped together.
constant foo = 12;
say foo;
constant term:<bar> = 36;
say bar;
There is a slight difference because term:<…> works by modifying the parser. So it takes precedence.
constant fubar = 38;
constant term:<fubar> = 45;
say fubar; # 45
The above will print 45 regardless of which constant definition comes first.
Since term:<…> takes precedence the only way to get at the other value is to use ::<fubar> to directly access the symbol table.
say ::<fubar>; # 38
say ::<term:<fubar>>; # 45
There are two main use-cases for term:<…>.
One is to get a subroutine to be parsed similarly to a constant or sigilless variable.
sub fubar () { 'fubar'.comb.roll }
# say( fubar( prefix:<~>( 4 ) ) );
say fubar ~ 4; # ERROR
sub term:<fubar> () { 'fubar'.comb.roll }
# say( infix:<~>( fubar, 4 ) );
say fubar ~ 4;
The other is to have a constant or sigilless variable be something other than a normal identifier.
my \✔ = True; # ERROR: Malformed my
my \term:<✔> = True;
say ✔;
Of course both use-cases can be combined.
sub term:<✔> () { True }
Perl 5 allows subroutines to have an empty prototype (different from a signature), which alters how they get parsed. The main purpose of prototypes in Perl 5 is to alter how code gets parsed.
use v5;
sub fubar () { ord [split('','fubar')]->[rand 5] }
# say( fubar() + 4 );
say fubar + 4; # infix +
use v5;
sub fubar { ord [split('','fubar')]->[rand 5] }
# say( fubar( +4 ) );
say fubar + 4; # prefix +
Perl 6 doesn't use signatures the way Perl 5 uses prototypes. The main way to alter how Perl 6 parses code is by using the namespace.
use v6;
sub fubar ( $_ ) { .comb.roll }
sub term:<fubar> () { 'fubar'.comb.roll }
say fubar( 'zoo' ); # `z` or `o` (`o` is twice as likely)
say fubar; # `f` or `u` or `b` or `a` or `r`
sub prefix:<✔> ( $_ ) { "checked $_" }
say ✔ 'under the bed'; # checked under the bed
Note that Perl 5 doesn't really have constants; they are just subroutines with an empty prototype.
use v5;
use constant foo => 12;
use v5;
sub foo () { 12 } # ditto
(This became less true after 5.16)
As far as I know all of the other uses of prototypes have been superseded by design decisions in Perl 6.
use v5;
sub foo (&$) { $_[0]->($_[1]) }
say foo { 100 + $_[0] } 5; # 105;
That block is seen as a sub lambda because of the prototype of the foo subroutine.
use v6;
# sub foo ( &f, $v ) { f $v }
sub foo { @_[0].( @_[1] ) }
say foo { 100 + @_[0] }, 5; # 105
In Perl 6 a block is seen as a lambda if a term is expected. So there is no need to alter the parser with a feature like a prototype.
You are asking for exactly one use of prototypes to be brought back even though there is already a feature that covers that use-case.
Doing so would be a special-case. Part of the design ethos of Perl 6 is to limit the number of special-cases.
Other versions of Perl had a wide variety of special-cases, and it isn't always easy to remember them all.
Don't get me wrong; the special-cases in Perl 5 are useful, but Perl 6 has for the most part made them general-cases.
The C++ code which I wanted to convert is:
#include <iostream>
using namespace std;
class numberClass
{
private:
int value;
public:
int read()
{
return value;
}
void load(int x)
{
value = x;
}
void increment()
{
value= value +1;
}
};
int main()
{
numberClass num;
num.load(5);
int x=num.read();
cout<<x<<endl;
num.increment();
x=num.read();
cout<<x;
}
I do not know how to make an entity (like a variable in C++) that can hold a value throughout the program in Haskell.
Please help.
Thanks
Basically, you can't. Values are immutable, and Haskell has no variables in the sense of boxes where you store values, like C++ and similar languages have. You can do something similar using IORefs (which are boxes you can store values in), but it's almost always the wrong design to use them.
Haskell is a very different programming language; it's not a good idea to translate code directly from a language like C, C++, or Java. One has to view the task from a different angle and approach it in a different way.
That being said:
module Main (main) where
import Data.IORef
main :: IO ()
main = do
num <- newIORef 5 :: IO (IORef Int)
x <- readIORef num
print x
modifyIORef num (+1)
x <- readIORef num
print x
Well, assuming that it's the wrapping you want, not the mutability, you can easily have a type that only allows constructing constant values and incrementing them:
module Incr (Incr, incr, fromIncr, toIncr) where
newtype Incr a = Incr a deriving (Read, Show)
fromIncr :: Incr a -> a
fromIncr (Incr x) = x
incr :: (Enum a) => Incr a -> Incr a
incr (Incr x) = Incr (succ x)
toIncr :: a -> Incr a
toIncr = Incr
As Daniel pointed out, mutability is out of the question, but another purpose of your class is encapsulation, which this module provides just like the C++ class. Of course to a Haskell programmer, this module might not seem very useful, but perhaps you have use cases in mind, where you want to statically prevent library users from using regular addition or multiplication.
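For instance, a client of the module can construct, read, and increment values, but nothing else. A minimal usage sketch (the demo names here are mine, assuming the module above is saved as Incr.hs):
import Incr
main :: IO ()
main = do
  let num = toIncr (5 :: Int) -- like num.load(5)
  print (fromIncr num)        -- 5
  print (fromIncr (incr num)) -- 6
  -- num + num would not compile: the module exports no arithmetic on Incr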
A direct translation of your code to Haskell is rather unidiomatic, but of course possible (as shown in Daniel's answer).
Usually, when you are working with state in Haskell, you can use the State monad. As long as you are executing inside the State monad, you can query and update your state. If you want to do some IO in addition (as in your example), you need to stack your State monad on top of IO.
Using this approach your code might look like this:
import Control.Monad.State
import Prelude hiding(read)
increment = modify (+1)
load = put
read = get
normal :: StateT Int IO ()
normal = do
load 5
x <- read
lift (print x)
increment
x <- read
lift (print x)
main = evalStateT normal 0
But here you don't have an explicit type for your numberClass. If you want one, there is a nice library on Hackage that you could use: data-lens.
Using lenses the code might be a little closer to your C++ version:
{-# LANGUAGE TemplateHaskell #-}
import Control.Monad.State(StateT,evalStateT,lift)
import Prelude hiding(read)
import Data.Lens.Lazy((~=),access,(%=))
import Data.Lens.Template(makeLenses)
data Number = Number {
_value :: Int
} deriving (Show)
$( makeLenses [''Number] )
increment = value %= succ
load x = value ~= x
read = access value
withLens :: StateT Number IO ()
withLens = do
load 5
x <- read
lift $ print x
increment
x <- read
lift $ print x
main = evalStateT withLens (Number 0)
Still not exactly your code... but well, it's Haskell and not yet another OO language.
I'm studying dynamic/static scope with deep/shallow binding and running code manually to see how these different scopes/bindings actually work. I read the theory and googled some example exercises, and the ones I found are very simple (like this one, which was very helpful with dynamic scoping). But I'm having trouble understanding how static scope works.
Here I post an exercise I did to check whether I got the right solution. Consider the following program, written in pseudocode:
int u = 42;
int v = 69;
int w = 17;
proc add( z:int )
u := v + u + z
proc bar( fun:proc )
int u := w;
fun(v)
proc foo( x:int, w:int )
int v := x;
bar(add)
main
foo(u,13)
print(u)
end;
What is printed to the screen?
a) using static scope? answer=180
b) using dynamic scope and deep binding? answer=69 (sum for u = 126 but it's foo's local v, right?)
c) using dynamic scope and shallow binding? answer=69 (sum for u = 101 but it's foo's local v, right?)
PS: I'm trying to practice with exercises like this; if you know where I can find these types of problems (preferably with solutions), please post a link. Thanks!
Your answer for lexical (static) scope is correct. Your answers for dynamic scope are wrong, but if I'm reading your explanations right, it's because you got confused between u and v, rather than because of any real misunderstanding about how deep and shallow binding work. (I'm assuming that your u/v confusion was just accidental, and not due to a strange confusion about values vs. references in the call to foo.)
a) using static scope? answer=180
Correct.
b) using dynamic scope and deep binding? answer=69 (sum for u = 126 but it's foo's local v, right?)
Your parenthetical explanation is right, but your answer is wrong: u is indeed set to 126, and foo indeed localizes v, but since main prints u, not v, the answer is 126.
c) using dynamic scope and shallow binding? answer=69 (sum for u = 101 but it's foo's local v, right?)
The sum for u is actually 97 (42+13+42), but since bar localizes u, the answer is 42. (Your parenthetical explanation is wrong for this one — you seem to have used the global variable w, which is 17, in interpreting the statement int u := w in the definition of bar; but that statement actually refers to foo's local variable w, its second parameter, which is 13. But that doesn't actually affect the answer. Your answer is wrong for this one only because main prints u, not v.)
For lexical scope, it's pretty easy to check your answers by translating the pseudo-code into a language with lexical scope. Likewise dynamic scope with shallow binding. (In fact, if you use Perl, you can test both ways almost at once, since it supports both; just use my for lexical scope, then do a find-and-replace to change it to local for dynamic scope. But even if you use, say, JavaScript for lexical scope and Bash for dynamic scope, it should be quick to test both.)
Dynamic scope with deep binding is much trickier, since few widely-deployed languages support it. If you use Perl, you can implement it manually by using a hash (an associative array) that maps from variable-names to scalar-refs, and passing this hash from function to function. Everywhere that the pseudocode declares a local variable, you save the existing scalar-reference in a Perl lexical variable, then put the new mapping in the hash; and at the end of the function, you restore the original scalar-reference. To support the binding, you create a wrapper function that creates a copy of the hash, and passes that to its wrapped function. Here is a dynamically-scoped, deeply-binding implementation of your program in Perl, using that approach:
#!/usr/bin/perl -w
use warnings;
use strict;
# Create a new scalar, initialize it to the specified value,
# and return a reference to it:
sub new_scalar($)
{ return \(shift); }
# Bind the specified procedure to the specified environment:
sub bind_proc(\%$)
{
my $V = { %{+shift} };
my $f = shift;
return sub { $f->($V, @_); };
}
my $V = {};
$V->{u} = new_scalar 42; # int u := 42
$V->{v} = new_scalar 69; # int v := 69
$V->{w} = new_scalar 17; # int w := 17
sub add(\%$)
{
my $V = shift;
my $z = $V->{z}; # save existing z
$V->{z} = new_scalar shift; # create & initialize new z
${$V->{u}} = ${$V->{v}} + ${$V->{u}} + ${$V->{z}};
$V->{z} = $z; # restore old z
}
sub bar(\%$)
{
my $V = shift;
my $fun = shift;
my $u = $V->{u}; # save existing u
$V->{u} = new_scalar ${$V->{w}}; # create & initialize new u
$fun->(${$V->{v}});
$V->{u} = $u; # restore old u
}
sub foo(\%$$)
{
my $V = shift;
my $x = $V->{x}; # save existing x
$V->{x} = new_scalar shift; # create & initialize new x
my $w = $V->{w}; # save existing w
$V->{w} = new_scalar shift; # create & initialize new w
my $v = $V->{v}; # save existing v
$V->{v} = new_scalar ${$V->{x}}; # create & initialize new v
bar %$V, bind_proc %$V, \&add;
$V->{v} = $v; # restore old v
$V->{w} = $w; # restore old w
$V->{x} = $x; # restore old x
}
foo %$V, ${$V->{u}}, 13;
print "${$V->{u}}\n";
__END__
and indeed it prints 126. It's obviously messy and error-prone, but it also really helps you understand what's going on, so for educational purposes I think it's worth it!
Shallow and deep binding are Lisp-interpreter viewpoints of the pseudocode. Scoping is just pointer manipulation. Dynamic scope and static scope are the same if there are no free variables.
Static scope relies on a pointer to an environment in memory. The empty environment holds no symbol-to-value associations; it is denoted by the word "End". Each time the interpreter reads an assignment, it makes space for an association between a symbol and a value.
The environment pointer is updated to point to the last association constructed.
env = End
env = [u,42] -> End
env = [v,69] -> [u,42] -> End
env = [w,17] -> [v,69] -> [u,42] -> End
Let me record this environment memory location as AAA. In my Lisp interpreter, when meeting a procedure, we take the environment pointer and put it in our pocket.
env = [add,[closure,(lambda(z)(setq u (+ v u z))),*AAA*]]->[w,17]->[v,69]->[u,42]->End.
That's pretty much all there is until the procedure add is called. Interestingly, if add is never called, you just cost yourself a pointer.
Suppose the program calls add(8). OK, let's roll. The environment AAA is made current. Environment is ->[w,17]->[v,69]->[u,42]->End.
Procedure parameters of add are added to the front of the environment. The environment becomes [z,8]->[w,17]->[v,69]->[u,42]->End.
Now the procedure body of add is executed. Free variable v will have value 69. Free variable u will have value 42. z will have the value 8.
u := v + u + z
u will be assigned the value of 69 + 42 + 8, becoming 119.
The environment will reflect this: [z,8]->[w,17]->[v,69]->[u,119]->End.
Assume procedure add has completed its task. Now the environment gets restored to its previous value.
env = [add,[closure,(lambda(z)(setq u (+ v u z))),*AAA*]]->[w,17]->[v,69]->[u,119]->End.
Notice how the procedure add has had a side effect of changing the value of free variable u. Awesome!
Regarding dynamic scoping: the closure simply leaves dynamic symbols out, so they are not captured; they keep being looked up dynamically. Assignments to dynamic variables then go at the top of the code. If a dynamic variable has the same name as a parameter, it gets masked by the parameter value passed in.
Suppose I had a dynamic variable called z. When I called add(8), z would have been set to 8 regardless of what I wanted. That's probably why dynamic variables have longer names.
Rumour has it that dynamic variables are useful for things like backtracking, using Lisp's let constructs.
Static binding, also known as lexical scope, refers to the scoping mechanism found in most modern languages.
Note that a naive Perl translation of the pseudocode, shown below, prints u=101 rather than 180, but only because it assigns to the file-scoped variables instead of declaring new lexicals: the pseudocode's int v := x in foo and int u := w in bar would have to become my $v = $x; and my $u = $w;. With those my declarations added, the code prints 180, in agreement with the answer above.
Please see the standard Perl code below to see where the 101 comes from.
use strict;
use warnings;
my $u = 42;
my $v = 69;
my $w = 17;
sub add {
my $z = shift;
$u = $v + $u + $z;
}
sub bar {
my $fun = shift;
$u = $w;
$fun->($v);
}
sub foo {
my ($x, $w) = @_;
$v = $x;
bar( \&add );
}
foo($u,13);
print "u: $u\n";
Regarding shallow binding versus deep binding, both mechanisms date back to the early LISP era.
Both mechanisms are meant to achieve dynamic binding (versus lexical scope binding) and therefore they produce identical results!
The differences between shallow binding and deep binding do not reside in semantics, which are identical, but in the implementation of dynamic binding.
With deep binding, variable bindings are set within a stack as "varname => varvalue" pairs.
The value of a given variable is retrieved from traversing the stack from top to bottom until a binding for the given variable is found.
Updating the variable consists in finding the binding in the stack and updating the associated value.
On entering a subroutine, a new binding for each actual parameter is pushed onto the stack, potentially hiding an older binding, which is therefore no longer accessible via the retrieval mechanism described above (which stops at the first binding found).
On leaving the subroutine, bindings for these parameters are simply popped from the binding stack, thus re-enabling access to the former bindings.
Please see the code below for a Perl implementation of deep-binding dynamic scope.
use strict;
use warnings;
use utf8;
##
# Dynamic-scope deep-binding implementation
my @stack = ();
sub bindv {
my ($varname, $varval);
unshift @stack, [ $varname => $varval ]
while ($varname, $varval) = splice @_, 0, 2;
return $varval;
}
sub unbindv {
my $n = shift || 1;
shift @stack while $n-- > 0;
}
sub getv {
my $varname = shift;
for (my $i=0; $i < @stack; $i++) {
return $stack[$i][1]
if $varname eq $stack[$i][0];
}
return undef;
}
sub setv {
my ($varname, $varval) = @_;
for (my $i=0; $i < @stack; $i++) {
return $stack[$i][1] = $varval
if $varname eq $stack[$i][0];
}
return bindv($varname, $varval);
}
##
# EXERCISE
bindv( u => 42,
v => 69,
w => 17,
);
sub add {
bindv(z => shift);
setv(u => getv('v')
+ getv('u')
+ getv('z')
);
unbindv();
}
sub bar {
bindv(fun => shift);
setv(u => getv('w'));
getv('fun')->(getv('v'));
unbindv();
}
sub foo {
bindv(x => shift,
w => shift,
);
setv(v => getv('x'));
bar( \&add );
unbindv(2);
}
foo( getv('u'), 13);
print "u: ", getv('u'), "\n";
The result is u=97
Nevertheless, this constant traversal of the binding stack is costly: O(n) complexity!
Shallow binding brings a wonderful O(1) performance improvement over the previous implementation!
Shallow binding improves on the former mechanism by assigning each variable its own "cell", storing the value of the variable within the cell.
The value of a given variable is simply retrieved from the variable's cell (using a hash table on variable names, we achieve O(1) complexity for accessing variables' values!).
Updating the variable's value is simply storing the value into the variable's cell.
Creating a new binding (entering subs) works by pushing the old value of the variable (a previous binding) onto the stack and storing the new local value in the value cell.
Eliminating a binding (leaving subs) works by popping the old value off the stack into the variable's value cell.
Please see the code below for a trivial Perl implementation of shallow-binding dynamic scope.
use strict;
use warnings;
our $u = 42;
our $v = 69;
our $w = 17;
our $z;
our $fun;
our $x;
sub add {
local $z = shift;
$u = $v + $u + $z;
}
sub bar {
local $fun = shift;
$u = $w;
$fun->($v);
}
sub foo {
local $x = shift;
local $w = shift;
$v = $x;
bar( \&add );
}
foo($u,13);
print "u: $u\n";
As you can see, the result is still u=97.
As a conclusion, remember two things:
shallow binding produces the same results as deep binding, but runs faster, since there is never a need to search for a binding;
the problem is not shallow binding versus deep binding versus static binding, BUT lexical scope versus dynamic scope (implemented with either deep or shallow binding).
I'm writing a function to find triangle numbers and the natural way to write it is recursively:
function triangle (x)
if x == 0 then return 0 end
return x+triangle(x-1)
end
But attempting to calculate the first 100,000 triangle numbers fails with a stack overflow after a while. This is an ideal function to memoize, but I want a solution that will memoize any function I pass to it.
Mathematica has a particularly slick way to do memoization, relying on the fact that hashes and function calls use the same syntax:
triangle[0] = 0;
triangle[x_] := triangle[x] = x + triangle[x-1]
That's it. It works because the rules for pattern-matching function calls are such that it always uses a more specific definition before a more general definition.
Of course, as has been pointed out, this example has a closed-form solution: triangle[x_] := x*(x+1)/2. Fibonacci numbers are the classic example of how adding memoization gives a drastic speedup:
fib[0] = 1;
fib[1] = 1;
fib[n_] := fib[n] = fib[n-1] + fib[n-2]
Although that too has a closed-form equivalent, albeit messier: http://mathworld.wolfram.com/FibonacciNumber.html
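For comparison, a lazy language offers another slick route to the same effect. In Haskell, the classic trick is to define the whole sequence as a self-referential list, so each element is computed at most once (a sketch, using fib(0) = fib(1) = 1 to match the Mathematica definition above):
fibs :: [Integer]
fibs = 1 : 1 : zipWith (+) fibs (tail fibs)
fib :: Int -> Integer
fib n = fibs !! n -- e.g. fib 10 == 89 under this indexing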
I disagree with the person who suggested this was inappropriate for memoization because you could "just use a loop". The point of memoization is that any repeat function calls are O(1) time. That's a lot better than O(n). In fact, you could even concoct a scenario where the memoized implementation has better performance than the closed-form implementation!
You're also asking the wrong question for your original problem ;)
This is a better way for that case:
triangle(n) = n * (n + 1) / 2
Furthermore, supposing the formula didn't have such a neat solution, memoisation would still be a poor approach here. You'd be better off just writing a simple loop in this case. See this answer for a fuller discussion.
I bet something like this should work with variable argument lists in Lua:
local function varg_tostring(...)
local s = select(1, ...)
for n = 2, select('#', ...) do
s = s..","..select(n,...)
end
return s
end
local function memoize(f)
local cache = {}
return function (...)
local al = varg_tostring(...)
if cache[al] then
return cache[al]
else
local y = f(...)
cache[al] = y
return y
end
end
end
You could probably also do something clever with metatables and __tostring so that the argument list could just be converted with tostring(). Oh, the possibilities.
In C# 3.0, for recursive functions, you can do something like:
public static class Helpers
{
public static Func<A, R> Memoize<A, R>(this Func<A, Func<A,R>, R> f)
{
var map = new Dictionary<A, R>();
Func<A, R> self = null;
self = (a) =>
{
R value;
if (map.TryGetValue(a, out value))
return value;
value = f(a, self);
map.Add(a, value);
return value;
};
return self;
}
}
Then you can create a memoized Fibonacci function like this:
var memoized_fib = Helpers.Memoize<int, int>((n,fib) => n > 1 ? fib(n - 1) + fib(n - 2) : n);
Console.WriteLine(memoized_fib(40));
In Scala (untested):
def memoize[A, B](f: (A)=>B) = {
var cache = Map[A, B]()
{ x: A =>
if (cache contains x) cache(x) else {
val back = f(x)
cache += (x -> back)
back
}
}
}
Note that this only works for functions of arity 1, but with currying you could make it work. The more subtle problem is that memoize(f) != memoize(f) for any function f. One very sneaky way to fix this would be something like the following:
val correctMem = memoize(memoize _)
I don't think that this will compile, but it does illustrate the idea.
Update: Commenters have pointed out that memoization is a good way to optimize recursion. Admittedly, I hadn't considered this before, since I generally work in a language (C#) where generalized memoization isn't so trivial to build. Take the post below with that grain of salt in mind.
I think Luke likely has the most appropriate solution to this problem, but memoization is not generally the solution to any issue of stack overflow.
Stack overflow usually is caused by recursion going deeper than the platform can handle. Languages sometimes support "tail recursion", which re-uses the context of the current call, rather than creating a new context for the recursive call. But a lot of mainstream languages/platforms don't support this. C# has no inherent support for tail-recursion, for example. The 64-bit version of the .NET JITter can apply it as an optimization at the IL level, which is all but useless if you need to support 32-bit platforms.
If your language doesn't support tail recursion, your best option for avoiding stack overflows is either to convert to an explicit loop (much less elegant, but sometimes necessary), or to find a non-recursive algorithm such as Luke provided for this problem.
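To make the explicit-loop option concrete for the triangle numbers, here is a short sketch in Haskell (the names are mine): a strict left fold plays the role of the loop, so the stack stays flat no matter how large n gets.
import Data.List (foldl')
-- sum 1..n iteratively instead of recursing n frames deep
triangleIter :: Integer -> Integer
triangleIter n = foldl' (+) 0 [1..n]
main :: IO ()
main = print (triangleIter 100000) -- 5000050000, no stack overflow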
function memoize (f)
local cache = {}
return function (x)
if cache[x] then
return cache[x]
else
local y = f(x)
cache[x] = y
return y
end
end
end
triangle = memoize(triangle);
Note that to avoid a stack overflow, triangle would still need to be seeded: memoization only caps the recursion depth once smaller results are already in the cache, so you would call it on 1, 2, 3, ... before asking for a large argument.
Here's something that works without converting the arguments to strings.
The only caveat is that it can't handle a nil argument. But the accepted solution can't distinguish the value nil from the string "nil", so that's probably OK.
local function m(f)
local t = { }
local function mf(x, ...) -- memoized f
assert(x ~= nil, 'nil passed to memoized function')
if select('#', ...) > 0 then
t[x] = t[x] or m(function(...) return f(x, ...) end)
return t[x](...)
else
t[x] = t[x] or f(x)
assert(t[x] ~= nil, 'memoized function returns nil')
return t[x]
end
end
return mf
end
I've been inspired by this question to implement (yet another) flexible memoize function in Lua.
https://github.com/kikito/memoize.lua
Main advantages:
Accepts a variable number of arguments
Doesn't use tostring; instead, it organizes the cache in a tree structure, using the parameters to traverse it.
Works just fine with functions that return multiple values.
Pasting the code here as reference:
local globalCache = {}
local function getFromCache(cache, args)
local node = cache
for i=1, #args do
if not node.children then return {} end
node = node.children[args[i]]
if not node then return {} end
end
return node.results
end
local function insertInCache(cache, args, results)
local arg
local node = cache
for i=1, #args do
arg = args[i]
node.children = node.children or {}
node.children[arg] = node.children[arg] or {}
node = node.children[arg]
end
node.results = results
end
-- public function
local function memoize(f)
globalCache[f] = { results = {} }
return function (...)
local results = getFromCache( globalCache[f], {...} )
if #results == 0 then
results = { f(...) }
insertInCache(globalCache[f], {...}, results)
end
return unpack(results)
end
end
return memoize
Here is a generic C# 3.0 implementation, if it helps:
public static class Memoization
{
public static Func<T, TResult> Memoize<T, TResult>(this Func<T, TResult> function)
{
var cache = new Dictionary<T, TResult>();
var nullCache = default(TResult);
var isNullCacheSet = false;
return parameter =>
{
TResult value;
if (parameter == null && isNullCacheSet)
{
return nullCache;
}
if (parameter == null)
{
nullCache = function(parameter);
isNullCacheSet = true;
return nullCache;
}
if (cache.TryGetValue(parameter, out value))
{
return value;
}
value = function(parameter);
cache.Add(parameter, value);
return value;
};
}
}
(Quoted from a French blog article)
In the vein of posting memoization in different languages, I'd like to respond to @onebyone.livejournal.com with a non-language-changing C++ example.
First, a memoizer for single arg functions:
#include <map>
template <class Result, class Arg, class ResultStore = std::map<Arg, Result> >
class memoizer1{
public:
template <class F>
const Result& operator()(F f, const Arg& a){
typename ResultStore::const_iterator it = memo_.find(a);
if(it == memo_.end()) {
it = memo_.insert(std::make_pair(a, f(a))).first;
}
return it->second;
}
private:
ResultStore memo_;
};
Create an instance of the memoizer, and feed it your function and argument. Just make sure not to share the same memo between two different functions (but you can share it between different implementations of the same function).
Next, a driver function and an implementation; only the driver function need be public:
int total_ops = 0; // counts calls to fib_, to verify memoization
int fib(int); // driver
int fib_(int); // implementation
Implemented:
int fib_(int n){
++total_ops;
if(n == 0 || n == 1)
return 1;
else
return fib(n-1) + fib(n-2);
}
And the driver, to memoize
int fib(int n) {
static memoizer1<int,int> memo;
return memo(fib_, n);
}
Permalink showing output on codepad.org. Number of calls is measured to verify correctness. (insert unit test here...)
This only memoizes one-input functions. Generalizing to multiple or varying arguments is left as an exercise for the reader.
In Perl, generic memoization is easy to get. The Memoize module is part of the Perl core and is highly reliable, flexible, and easy to use.
The example from its manpage:
# This is the documentation for Memoize 1.01
use Memoize;
memoize('slow_function');
slow_function(arguments); # Is faster than it was before
You can add, remove, and customize memoization of functions at run time! You can provide callbacks for custom memento computation.
Memoize.pm even has facilities for making the memento cache persistent, so it does not need to be re-filled on each invocation of your program!
Here's the documentation: http://perldoc.perl.org/5.8.8/Memoize.html
Extending the idea, it's also possible to memoize functions with two input parameters:
function memoize2 (f)
local cache = {}
return function (x, y)
if cache[x..','..y] then
return cache[x..','..y]
else
local z = f(x,y)
cache[x..','..y] = z
return z
end
end
end
Notice that parameter order matters in the caching algorithm, so if parameter order doesn't matter in the functions to be memoized the odds of getting a cache hit would be increased by sorting the parameters before checking the cache.
But it's important to note that some functions can't be profitably memoized. I wrote memoize2 to see if the recursive Euclidean algorithm for finding the greatest common divisor could be sped up.
function gcd (a, b)
if b == 0 then return a end
return gcd(b, a%b)
end
As it turns out, gcd doesn't respond well to memoization. The calculation it does is far less expensive than the caching algorithm. Even for large numbers, it terminates fairly quickly. After a while, the cache grows very large. This algorithm is probably as fast as it can be.
Recursion isn't necessary. The nth triangle number is n(n+1)/2, so...
public int triangle(final int n){
return n * (n + 1) / 2;
}
Please don't recurse this. Either use the x*(x+1)/2 formula or simply iterate the values and memoize as you go.
int[] memo = new int[n+1];
int sum = 0;
for(int i = 0; i <= n; ++i)
{
sum+=i;
memo[i] = sum;
}
return memo[n];