String comparison in the core language - optimization

Taking the simple comparison loopValue == "Firstname", is the following statement true?
If the internal routine inspecting the characters finds that the first char does not match, it aborts early.
So, taking the rawer form where loopValue and "Firstname" are both []byte, would it walk the arrays with something like a callback loop for truth:
someInspectionFunc(loopValue, "Firstname", func(charA, charB byte) bool {
    return charA == charB
})
... keeping going until the callback returns false, then checking whether the number of iterations equalled both lengths. Or does it check the lengths first, like this?
if len(loopValue) != len("Firstname") {
return false
}
I can't really find an explanation in the Go source code on GitHub, as it's a bit above my level.
The reason I'm asking is that I'm doing big-data processing, and I'm benchmarking and running CPU, memory and allocation pprof profiles to squeeze some more juice out of the process. That made me wonder how Go (but also just C in general) does this under the hood. Does this happen fully at the assembly level, or does the comparison already happen in native Go code (roughly as sketched in the snippets above)?
Please let me know if I'm being too vague or if I missed something. Thank you.
Update
When I added a first-character match on big JSON strings before doing the full comparison, I got about a 3.7% benchmark gain on 100k heavy entries:
<some irrelevant inspection code>.. v[0] == firstChar && v == lookFor {
// Match found when it reaches here
}
The code above (especially on long strings) is faster than just using v == lookFor.

The function is handled in assembly. The amd64 version is:
TEXT runtime·eqstring(SB),NOSPLIT,$0-33
MOVQ s1str+0(FP), SI
MOVQ s2str+16(FP), DI
CMPQ SI, DI
JEQ eq
MOVQ s1len+8(FP), BX
LEAQ v+32(FP), AX
JMP runtime·memeqbody(SB)
eq:
MOVB $1, v+32(FP)
RET
And it's the compiler's job to ensure that the strings are of equal length before that is called. (The runtime·memeqbody function is actually where the optimized memory comparisons happen, but there's probably no need to post the full text here)
The equivalent Go code would be:
func eqstring_generic(s1, s2 string) bool {
if len(s1) != len(s2) {
return false
}
for i := 0; i < len(s1); i++ {
if s1[i] != s2[i] {
return false
}
}
return true
}
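As for the "but also just C in general" part of the question: a counted-string comparison in C typically has the same shape. The sketch below is my own illustration (str_eq_counted is a made-up helper, not something from libc or the Go runtime):
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Compare two counted strings: lengths first, then pointer identity, and only
 * then the bytes themselves via memcmp (which libc typically implements with
 * word-at-a-time or SIMD compares). */
static bool str_eq_counted(const char *a, size_t alen, const char *b, size_t blen)
{
    if (alen != blen)   /* different lengths can never be equal */
        return false;
    if (a == b)         /* same backing memory: trivially equal */
        return true;
    return memcmp(a, b, alen) == 0;
}
The early abort on the first differing byte that the question asks about happens inside memcmp (or runtime·memeqbody in Go); because the lengths are compared up front, there is no need to count iterations afterwards as the question speculates.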

Related

Variable getting overwritten in for loop

In a for loop, a different variable is assigned a value in each iteration. The variable that was assigned a value in the previous iteration ends up getting the value from the next iteration. At the end, both variables have the same value.
The code is for validating data in a file. When I print the values, it prints correct value for first iteration but in the next iteration, the value assigned in first iteration is changed.
When I print the values of $value3 and $value4 in the for loop, it shows null for $value4 and some value for $value3, but in the next iteration the value of $value3 is overwritten by the value of $value4.
I have tried this on Rakudo Perl 6.c.
my $fh= $!FileName.IO.open;
my $fileObject = FileValidation.new( file => $fh );
for (3,4).list {
put "Iteration: ", $_;
if ($_ == 4) {
$value4 := $fileObject.FileValidationFunction(%.ValidationRules{4}<ValidationFunction>, %.ValidationRules{4}<Arguments>);
}
if ($_ == 3) {
$value3 := $fileObject.FileValidationFunction(%.ValidationRules{3}<ValidationFunction>, %.ValidationRules{3}<Arguments>);
}
$fh.seek: SeekFromBeginning;
}
TL;DR It's not possible to confidently answer your question as it stands. This is a nanswer -- an answer in the sense that I'm writing it as one, but quite possibly not an answer in the sense of helping you fix your problem.
Is it is rw? A first look.
The is rw trait on a routine or class attribute means it returns a container that contains a value rather than just returning a value.
If you then alias that container then you can get the behavior you've described.
For example:
my $foo;
sub bar is rw { $foo = rand }
my ($value3, $value4);
$value3 := bar;
.say for $value3, $value4;
$value4 := bar;
.say for $value3, $value4;
displays:
0.14168492246366005
(Any)
0.31843665763839857
0.31843665763839857
This isn't a bug in the language or compiler. It's just P6 code doing what it's supposed to do.
A longer version of the same thing
Perhaps the above is so far from your code it's disorienting. So here's the same thing wrapped in something like the code you provided.
spurt 'junk', 'junk';
class FileValidation {
has $.file;
has $!foo;
method FileValidationFunction ($,$) is rw { $!foo = rand }
}
class bar {
has $!FileName = 'junk';
has %.ValidationRules =
{ 3 => { ValidationFunction => {;}, Arguments => () },
4 => { ValidationFunction => {;}, Arguments => () } }
my ($value3, $value4);
method baz {
my $fh= $!FileName.IO.open;
my $fileObject = FileValidation.new( file => $fh );
my ($value3, $value4);
for (3,4).list {
put "Iteration: ", $_;
if ($_ == 4) {
$value4 := $fileObject.FileValidationFunction(
%.ValidationRules{4}<ValidationFunction>, %.ValidationRules{4}<Arguments>);
}
if ($_ == 3) {
$value3 := $fileObject.FileValidationFunction(
%.ValidationRules{3}<ValidationFunction>, %.ValidationRules{3}<Arguments>);
}
$fh.seek: SeekFromBeginning;
.say for $value3, $value4
}
}
}
bar.new.baz
This outputs:
Iteration: 3
0.5779679442816953
(Any)
Iteration: 4
0.8650280000277686
0.8650280000277686
Is it is rw? A second look.
Brad and I came up with essentially the same answer (at the same time; I was a minute ahead of Brad but who's counting? I mean besides me? :)) but Brad nicely nails the fix:
One way to avoid aliasing a container is to just use =.
(This is no doubt also why #ElizabethMattijsen++ asked about trying = instead of :=.)
You've commented that changing from := to = made no difference.
But presumably you didn't change from := to = throughout your entire codebase but rather just (the equivalent of) the two in the code you've shared.
So perhaps the problem can still be fixed by switching from := to =, but in some of your code elsewhere. (That said, don't just globally replace := with =. Instead, make sure you understand their difference and then change them as appropriate. You've got a test suite, right? ;))
How to move forward if you're still stuck
Right now your question has received several upvotes and no downvotes and you've got two answers (that point to the same problem).
But maybe our answers aren't good enough.
If so...
The addition of the reddit comment, and trying = instead of :=, and trying the latest compiler, and commenting on those things, leaves me glad I didn't downvote your question, but I haven't upvoted it yet and there's a reason for that. It's because your question is still missing a Minimal Reproducible Example.
You responded to my suggestion about producing an MRE with:
The problem is that I am not able to replicate this in a simpler environment
I presumed that's your situation, but as you can imagine, that means we can't confidently replicate it at all. That may be the way you prefer to go, for your own reasons, but it goes against SO guidance (in the link above), and if the current answers aren't adequate then the sensible way forward is for you to do what it takes to share code that reproduces your problem.
If it's large, please don't just paste it into your question but instead link to it. Perhaps you can set it up on glot.io using the + button to use multiple files (up to 6 I think, plus there's a standard input too). If not, perhaps gist it via, say, gist.github.com, and if I can I'll set it up on glot.io for you.
What is probably happening is that you are returning a container rather than a value, then aliasing the container to a variable.
class Foo {
has $.a is rw;
}
my $o = Foo.new( a => 1 );
my $old := $o.a;
say $old; # 1
$o.a = 2;
say $old; # 2
One way to avoid aliasing a container is to just use =.
my $old = $o.a;
say $old; # 1
$o.a = 2;
say $old; # 1
You could also decontainerize the value using either .self or .<>
my $old := $o.a.<>;
say $old; # 1
$o.a = 2;
say $old; # 1
(Note that .<> above could be .self or just <>.)
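If it helps to see the aliasing-vs-copying distinction outside Perl 6, here is a rough C analogy of my own (the names are invented; it is not a translation of the code above): binding with := behaves like keeping a pointer to the same storage, while assignment with = copies the value out of it.
#include <stdio.h>

static double container;                 /* plays the role of the $!foo attribute */

static double *validate(double v)        /* "is rw": hands back the container itself */
{
    container = v;
    return &container;
}

int main(void)
{
    double *value3 = validate(0.25);     /* like $value3 := ...  (alias the container) */
    double  copied = *validate(0.25);    /* like $value3  = ...  (copy the value)      */

    validate(0.75);                      /* the next iteration stores a new value      */

    printf("%f %f\n", *value3, copied);  /* prints 0.750000 0.250000                   */
    return 0;
}
The aliased value3 sees the later write because it still refers to the shared storage; the copied value does not.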

Counter as variable in for-in-loops

When normally using a for-in-loop, the counter (in this case number) is a constant in each iteration:
for number in 1...10 {
// do something
}
This means I cannot change number in the loop:
for number in 1...10 {
if number == 5 {
++number
}
}
// doesn't compile, since the prefix operator '++' can't be performed on the constant 'number'
Is there a way to declare number as a variable, without declaring it before the loop, or using a normal for-loop (with initialization, condition and increment)?
Understanding why i can't be mutable involves knowing what for…in is shorthand for. for i in 0..<10 is expanded by the compiler to the following:
var g = (0..<10).generate()
while let i = g.next() {
// use i
}
Every time around the loop, i is a freshly declared variable, the value of unwrapping the next result from calling next on the generator.
Now, that while can be written like this:
while var i = g.next() {
// here you _can_ increment i:
if i == 5 { ++i }
}
but of course, it wouldn't help – next time around the loop i is bound afresh from g.next(), which is going to produce 6 regardless, so the increment in the body was pointless.
Presumably for this reason, for…in doesn't support the same var syntax for declaring its loop counter – it would be very confusing if you didn't realize how it worked.
(Unlike with while var, where you can see what is going on – there the var functionality is occasionally useful, similarly to how func f(var i) can be.)
If what you want is to skip certain iterations of the loop, your better bet (without resorting to C-style for or while) is to use a generator that skips the relevant values:
// iterate over every other integer
for i in 0.stride(to: 10, by: 2) { print(i) }
// skip a specific number
for i in (0..<10).filter({ $0 != 5 }) { print(i) }
let a = ["one","two","three","four"]
// ok so this one’s a bit convoluted...
let everyOther = a.enumerate().filter { $0.0 % 2 == 0 }.map { $0.1 }.lazy
for s in everyOther {
print(s)
}
The answer is "no", and that's a good thing. Otherwise, a grossly confusing behavior like this would be possible:
for number in 1...10 {
if number == 5 {
// This does not work
number = 5000
}
println(number)
}
Imagine the confusion of someone looking at the number 5000 in the output of a loop that is supposedly bound to a range of 1 through 10, inclusive.
Moreover, what would Swift pick as the next value after 5000? Should it stop? Should it continue to the next number in the range before the assignment? Should it throw an exception on an out-of-range assignment? All three choices have some validity to them, so there is no clear winner.
To avoid situations like that, Swift designers made loop variables in range loops immutable.
Update for Swift 5 - you can declare the loop variable with var, though note this only gives you a mutable copy inside each iteration; it doesn't change which values the loop produces:
for var i in 0...10 {
print(i)
i+=1
}

Realloc not expanding my array

I'm having trouble implementing realloc in a very basic way.
I'm trying to expand the region of memory at **ret, which points to an array of structs,
with ret = realloc(ret, newsize); and based on my debug strings I know newsize is correctly increasing over the course of the loop (going from the original size of 4 to 8 to 12, etc.). But when I do sizeof(ptr) it still returns the original size of 4, and the things I'm trying to place into the newly allocated space can't be found (I think I've narrowed it down to realloc(), which is why I'm formatting the question like this).
I can post the function in its entirety if the problem isn't immediately evident to you; I'm just trying not to "cheat" with my homework too much (the code is kind of messy right now anyway, with heavy use of printf() for debug).
[EDIT] Alright, so based on your answers I'm failing at debugging my code, so I guess I'll post the whole function so you can tell me more about what I'm doing wrong.
(You can ignore the printf()'s since most of that is debug that isn't even working)
Booking **bookingSelectPaid(Booking **booking) {
Booking **ret = malloc(sizeof(Booking*));
printf("Initial address of ret = %p\n", ret);
size_t i = 0;
int numOfPaid = 0;
while (booking[i] != NULL)
{
if (booking[i]->paid == 1)
{
printf("Paying customer! sizeof(Booking*) = %d\n", (int)sizeof(Booking*));
++numOfPaid;
size_t newsize = sizeof(Booking*) * (numOfPaid + 1);
printf("Newsize = %d\n", (int)newsize);
Booking **temp = realloc(NULL, (size_t)newsize);
if (temp != NULL)
printf("Expansion success! => %p sizeof(new pointer) = %d ret = %p\n", temp, (int)sizeof(temp), ret);
ret = realloc(ret, newsize);
ret[i] = booking[i];
ret[i+1] = NULL;
}
++i;
printf("Sizeof(ret) = %d numOfPaid = %d\n", (int)sizeof(ret), numOfPaid);
}
return ret; }
[EDIT2] --> http://pastebin.com/xjzUBmPg
[EDIT3] Just to be clear, the printf's, the temp pointer and things of that nature are debug, and not part of the intended functionality. The line that is puzzling me is either the one with realloc(ret, newsize); or ret[i] = booking[i]
Basically I know for sure that booking contains a table of structs that ends in NULL, and I'm trying to bring the ones that have a specific value set to 1 (paid) onto the new table, which is what my main() is trying to get from this function... So where am I going wrong?
I think the problem here is that your sizeof(ptr) only returns the size of the pointer, which will depend on your architecture (you say 4, so that would mean you're running a 32-bit system).
If you allocate memory dynamically, you have to keep track of its size yourself.
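For example, here's a minimal sketch of that pattern (plain ints rather than your Booking structs, and names of my own) - the allocation grows with realloc while a separate count variable tracks the logical length:
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t count = 0;   /* how many elements are stored; sizeof can't tell you this */
    int *data = NULL;   /* realloc(NULL, n) behaves like malloc(n) */

    for (int i = 0; i < 10; i++) {
        int *tmp = realloc(data, (count + 1) * sizeof *data);
        if (tmp == NULL) {            /* on failure the old block is still valid */
            free(data);
            return 1;
        }
        data = tmp;
        data[count++] = i * i;
    }

    printf("count = %zu, sizeof(data) = %zu\n", count, sizeof data);
    free(data);
    return 0;
}
No matter how large the allocation grows, sizeof data stays the size of the pointer; only count knows how many elements are in it.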
Because sizeof(ptr) returns the size of the pointer, not the allocated size
Yep, sizeof(ptr) is a constant; as the other answers say, it depends on the architecture. On a 32-bit architecture it will be 4 and on a 64-bit architecture it will be 8.
Good luck.

Does "if ([bool] == true)" require one more step than "if ([bool])"?

This is a purely pedantic question, to sate my own curiosity.
I tend to go with the latter option in the question (so: if (boolCheck) { ... }), while a coworker always writes the former (if (boolCheck == true) { ... }). I always kind of teased him about it, and he always explained it as an old habit from when he was first starting programming.
But it just occurred to me today that actually writing out the whole == true part may in fact require an additional step for processing, since any expression with a == operator gets evaluated to a Boolean value. Is this true?
In other words, as I understand it, the option without the == true line could be loosely described as follows:
Check X
While the option with the == true line would be more like:
Let Y be true if X is true, otherwise false
Check Y
Am I correct? Or perhaps any normal compiler/interpreter will do away with this difference? Or am I overlooking something, and there's really no difference at all?
Obviously, there will be no difference in terms of actual observed performance. Like I said, I'm just curious.
EDIT: Thanks to everyone who actually posted compiled results to illustrate whether the steps were different between the two approaches. (It seems, most of the time, they were, albeit only slightly.)
I just want to reiterate that I was not asking about what is the "right" approach. I understand that many people favor one over the other. I also understand that, logically, the two are identical. I was just curious if the actual operations being performed by the CPU are exactly the same for both methods; as it turns out, much of the time (obviously it depends on language, compiler, etc.), they are not.
I think the comparison with true shows a lack of understanding by your partner. Bools should be given names that make the test read naturally on its own (such as isAvailable: if (isAvailable) {...}).
I would expect the difference to be optimised away by any half-decent compiler.
(I just checked with C# and the compiled code is exactly the same for both syntaxes.)
The compiler should generate the same code. However, comparing with true is arguably better because it is more explicit. Generally I don't do the explicit comparison, but you shouldn't make fun of him for doing it.
Edit: The easiest way to tell is to try. The MS compiler (cl.exe) generates the same number of steps in assembly:
int _tmain(int argc, _TCHAR* argv[])
{
bool test_me = true;
if (test_me) {
004113C2 movzx eax,byte ptr [test_me]
004113C6 test eax,eax
004113C8 je wmain+41h (4113E1h)
printf("test_me was true!");
}
if (test_me == true) {
004113E1 movzx eax,byte ptr [test_me]
004113E5 cmp eax,1
004113E8 jne wmain+61h (411401h)
printf("still true!");
}
return 0;
}
At this point the question is do test and cmp have the same cost? My guess is yes, though experts may be able to point out differences.
The practical upshot is you shouldn't worry about this. Chances are you have way bigger performance fish to fry.
Duplicate question (Should I use `!IsGood` or `IsGood == false`?). Here's a pointer to my previous answer:
The technique of testing specifically against true or false is bad practice if the variable in question is really supposed to be used as a boolean value (even if its type is not boolean) - especially in C/C++. Testing against true can (and probably will) lead to subtle bugs.
See the following SO answer for details:
Should I use `!IsGood` or `IsGood == false`?
Here is a thread which details in more depth the reasoning behind why comparing with "== true" is often a mistake, including Stroustrup's explanation:
https://qt-project.org/forums/viewthread/32642
In my experience if (flag==true) is bad practice.
The first argument is academic:
If you have a bool flag, it is either true or false.
Now, the expression
(flag==true)
again, is true or false - it is no more expressive, only redundant - flag can't get "more true" or "more false" than it already is. It would be "clearer" only if it's not obvious flag is a boolean - but there's a standard way to fix that which works for all types: choose a better name.
Stretching this beyond reason, the following would be "even better":
((flag==true)==true)
The second argument is pragmatic and platform-specific
C and early C++ implementations had no real "bool" type, so there are different conventions for flags, the most common being that anything nonzero is true. It is not uncommon for APIs to return an integer-based BOOL type but not enforce that the return value is 0 or 1.
Some environments use the following definitions:
#define FALSE 0
#define TRUE (!FALSE)
good luck with if ((1==1) == TRUE)
Further, some platforms use different values - e.g. the VARIANT_BOOL for VB interop is a short, and VARIANT_TRUE is -1.
When mixing libraries using these definitions, an explicit comparison to true can easily be an error disguised as good intentions. So, don't.
Here's the Python (2.6) disassembly:
>>> def check(x): return (bool(x) == True)
>>> import dis
>>> dis.dis(check)
1 0 LOAD_GLOBAL 0 (bool)
3 LOAD_FAST 0 (x)
6 CALL_FUNCTION 1
9 LOAD_GLOBAL 1 (True)
12 COMPARE_OP 2 (==)
15 RETURN_VALUE
>>> def check2(x): return bool(x)
>>> dis.dis(check2)
1 0 LOAD_GLOBAL 0 (bool)
3 LOAD_FAST 0 (x)
6 CALL_FUNCTION 1
9 RETURN_VALUE
I suppose the reason check isn't optimized is due to Python being a dynamic language. This in itself doesn't mean Python uses a poor compiler, but it could stand to do a little type inference here. Maybe it makes a difference when a .pyd file is created?
There may be some justification for the "boolCheck == true" syntax depending on the name of the variable you're testing.
For example, if the variable name was "state", then you can either have:
if (state) {
...
}
or you can have
if (state == true) {
...
}
In this case, I think the latter form is clearer and more obvious.
Another example would be a variable like "enabled", in which case the first form is clearer.
Now having a boolean variable called "state" is bad practice, but you may have no control over that, in which case the "== true" syntax may improve code readability.
This gets into specific languages and what they consider "truthy" values. Here's two common ones: JavaScript and PHP.
The triple equals operator in both these languages assures that you really are checking for a boolean type of truth. For example, in PHP checking for if ($value) can be different from if ($value==true) or if ($value===true):
$f = true;
$z = 1;
$n = null;
$a = array(1,2);
print ($f) ?'true':'false'; // true
print ($f==true) ?'true':'false'; // true
print ($f===true) ?'true':'false'; // true
print "\n";
print ($z) ?'true':'false'; // true
print ($z==true) ?'true':'false'; // true
print ($z===true) ?'true':'false'; // false
print "\n";
print ($n) ?'true':'false'; // false
print ($n==true) ?'true':'false'; // false
print ($n===true) ?'true':'false'; // false
print "\n";
print ($a) ?'true':'false'; // true
print ($a==true) ?'true':'false'; // true
print ($a===true) ?'true':'false'; // false
print "\n";
print "\n";
I expect that languages one speaks daily will inform how one views this question.
The MSVC++ 6.0 compiler generates slightly different code for the two forms:
4: if ( ok ) {
00401033 mov eax,dword ptr [ebp-8]
00401036 and eax,0FFh
0040103B test eax,eax
0040103D je main+36h (00401046)
....
7: if ( ok == true ) {
00401046 mov ecx,dword ptr [ebp-8]
00401049 and ecx,0FFh
0040104F cmp ecx,1
00401052 jne main+4Bh (0040105b)
The former version should be very slightly faster, if I remember my 8086 timings correctly :-)
It depends on the compiler. There might be optimizations for either case.
Testing for equality to true can lead to a bug, at least in C: in C, 'if' can be used on integer types (not just on boolean types) and accepts any non-zero value; however, the symbol 'TRUE' is one specific non-zero value (e.g. 1). This can become important with bit-flags:
if (flags & DCDBIT)
{
//do something when the DCDBIT bit is set in the flags variable
}
So given a function like this ...
int isDcdSet()
{
return (flags & DCDBIT);
}
... the expression "if (isDcdSet())" is not the same as "if (isDcdSet() == TRUE)".
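Here's that pitfall as a small runnable sketch (the DCDBIT value and the flags contents are made up for illustration):
#include <stdio.h>

#define TRUE   1
#define DCDBIT 0x08

static int flags = 0x0C;                               /* DCDBIT happens to be set */

static int isDcdSet(void) { return flags & DCDBIT; }   /* returns 0x08, not 1 */

int main(void)
{
    if (isDcdSet())
        puts("plain test: bit detected");              /* runs: 0x08 is non-zero */
    if (isDcdSet() == TRUE)
        puts("== TRUE test: bit detected");            /* never runs: 0x08 != 1  */
    return 0;
}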
Anyway; I'd think that an optimizing compiler should optimize away any difference (because there is no logical difference), assuming it's a language with a true boolean (not just integer) type.
There is no logical difference, presuming there is no internal typecasting going on, e.g., testing fstream.
ifstream fin;
...
if( ! fin)
...
It may not be the same if you are working with a nullable bool (for example, C#'s bool?).
Example:
bool? myBool = true;
if (myBool) // This will not compile
if (myBool == true) // this will compile
IMHO it's a bad idea to use the if (boolCheck == true) form, for a simple reason: if you accidentally type a single 'equals' sign instead of a double (i.e. if (boolCheck = true)), it will assign true to boolCheck and the condition will always be true, which would obviously be a bug. Of course, most modern compilers will show a warning when they see that (at least the C# compiler does), but many developers just ignore warnings...
If you want to be explicit, you should prefer this form: if (true == boolCheck). This avoids accidentally assigning to the variable, and will cause a compile error if you forget an 'equals' sign.
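To make the accidental-assignment pitfall concrete, here is a small C sketch of my own (whether and how the compiler warns varies, so treat the comments as the general idea rather than guaranteed diagnostics):
#include <stdbool.h>
#include <stdio.h>

int main(void)
{
    bool boolCheck = false;

    if (boolCheck = true) {          /* typo: assigns, then tests the assigned value, so this branch always runs */
        puts("runs even though boolCheck started out false");
    }

    /* if (true = boolCheck) { } */  /* the same typo in the reversed form does not compile */

    if (boolCheck) {                 /* the plain form: nothing to mistype */
        puts("boolCheck is true now");
    }
    return 0;
}
Either way, the plain if (boolCheck) form sidesteps the problem without reversing the operands.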

How do I write a generic memoize function?

I'm writing a function to find triangle numbers and the natural way to write it is recursively:
function triangle (x)
if x == 0 then return 0 end
return x+triangle(x-1)
end
But attempting to calculate the first 100,000 triangle numbers fails with a stack overflow after a while. This is an ideal function to memoize, but I want a solution that will memoize any function I pass to it.
Mathematica has a particularly slick way to do memoization, relying on the fact that hashes and function calls use the same syntax:
triangle[0] = 0;
triangle[x_] := triangle[x] = x + triangle[x-1]
That's it. It works because the rules for pattern-matching function calls are such that it always uses a more specific definition before a more general definition.
Of course, as has been pointed out, this example has a closed-form solution: triangle[x_] := x*(x+1)/2. Fibonacci numbers are the classic example of how adding memoization gives a drastic speedup:
fib[0] = 1;
fib[1] = 1;
fib[n_] := fib[n] = fib[n-1] + fib[n-2]
Although that too has a closed-form equivalent, albeit messier: http://mathworld.wolfram.com/FibonacciNumber.html
I disagree with the person who suggested this was inappropriate for memoization because you could "just use a loop". The point of memoization is that any repeat function calls are O(1) time. That's a lot better than O(n). In fact, you could even concoct a scenario where the memoized implementation has better performance than the closed-form implementation!
You're also asking the wrong question for your original problem ;)
This is a better way for that case:
triangle(n) = n * (n + 1) / 2
Furthermore, supposing the formula didn't have such a neat solution, memoisation would still be a poor approach here. You'd be better off just writing a simple loop in this case. See this answer for a fuller discussion.
I bet something like this should work with variable argument lists in Lua:
local function varg_tostring(...)
local s = select(1, ...)
for n = 2, select('#', ...) do
s = s..","..select(n,...)
end
return s
end
local function memoize(f)
local cache = {}
return function (...)
local al = varg_tostring(...)
if cache[al] then
return cache[al]
else
local y = f(...)
cache[al] = y
return y
end
end
end
You could probably also do something clever with metatables and __tostring so that the argument list could just be converted with a tostring(). Oh, the possibilities.
In C# 3.0 - for recursive functions, you can do something like:
public static class Helpers
{
public static Func<A, R> Memoize<A, R>(this Func<A, Func<A,R>, R> f)
{
var map = new Dictionary<A, R>();
Func<A, R> self = null;
self = (a) =>
{
R value;
if (map.TryGetValue(a, out value))
return value;
value = f(a, self);
map.Add(a, value);
return value;
};
return self;
}
}
Then you can create a memoized Fibonacci function like this:
var memoized_fib = Helpers.Memoize<int, int>((n,fib) => n > 1 ? fib(n - 1) + fib(n - 2) : n);
Console.WriteLine(memoized_fib(40));
In Scala (untested):
def memoize[A, B](f: (A)=>B) = {
var cache = Map[A, B]()
{ x: A =>
if (cache contains x) cache(x) else {
val back = f(x)
cache += (x -> back)
back
}
}
}
Note that this only works for functions of arity 1, but with currying you could make it work. The more subtle problem is that memoize(f) != memoize(f) for any function f. One very sneaky way to fix this would be something like the following:
val correctMem = memoize(memoize _)
I don't think that this will compile, but it does illustrate the idea.
Update: Commenters have pointed out that memoization is a good way to optimize recursion. Admittedly, I hadn't considered this before, since I generally work in a language (C#) where generalized memoization isn't so trivial to build. Take the post below with that grain of salt in mind.
I think Luke likely has the most appropriate solution to this problem, but memoization is not generally the solution to any issue of stack overflow.
Stack overflow usually is caused by recursion going deeper than the platform can handle. Languages sometimes support "tail recursion", which re-uses the context of the current call, rather than creating a new context for the recursive call. But a lot of mainstream languages/platforms don't support this. C# has no inherent support for tail-recursion, for example. The 64-bit version of the .NET JITter can apply it as an optimization at the IL level, which is all but useless if you need to support 32-bit platforms.
If your language doesn't support tail recursion, your best option for avoiding stack overflows is either to convert to an explicit loop (much less elegant, but sometimes necessary), or find a non-iterative algorithm such as Luke provided for this problem.
function memoize (f)
local cache = {}
return function (x)
if cache[x] then
return cache[x]
else
local y = f(x)
cache[x] = y
return y
end
end
end
triangle = memoize(triangle);
Note that to avoid a stack overflow, triangle would still need to be seeded - that is, called with steadily increasing arguments so the cache gets filled before any deeply recursive call is made.
Here's something that works without converting the arguments to strings.
The only caveat is that it can't handle a nil argument. But the accepted solution can't distinguish the value nil from the string "nil", so that's probably OK.
local function m(f)
local t = { }
local function mf(x, ...) -- memoized f
assert(x ~= nil, 'nil passed to memoized function')
if select('#', ...) > 0 then
t[x] = t[x] or m(function(...) return f(x, ...) end)
return t[x](...)
else
t[x] = t[x] or f(x)
assert(t[x] ~= nil, 'memoized function returns nil')
return t[x]
end
end
return mf
end
I've been inspired by this question to implement (yet another) flexible memoize function in Lua.
https://github.com/kikito/memoize.lua
Main advantages:
Accepts a variable number of arguments
Doesn't use tostring; instead, it organizes the cache in a tree structure, using the parameters to traverse it.
Works just fine with functions that return multiple values.
Pasting the code here as reference:
local globalCache = {}
local function getFromCache(cache, args)
local node = cache
for i=1, #args do
if not node.children then return {} end
node = node.children[args[i]]
if not node then return {} end
end
return node.results
end
local function insertInCache(cache, args, results)
local arg
local node = cache
for i=1, #args do
arg = args[i]
node.children = node.children or {}
node.children[arg] = node.children[arg] or {}
node = node.children[arg]
end
node.results = results
end
-- public function
local function memoize(f)
globalCache[f] = { results = {} }
return function (...)
local results = getFromCache( globalCache[f], {...} )
if #results == 0 then
results = { f(...) }
insertInCache(globalCache[f], {...}, results)
end
return unpack(results)
end
end
return memoize
Here is a generic C# 3.0 implementation, if it could help:
public static class Memoization
{
public static Func<T, TResult> Memoize<T, TResult>(this Func<T, TResult> function)
{
var cache = new Dictionary<T, TResult>();
var nullCache = default(TResult);
var isNullCacheSet = false;
return parameter =>
{
TResult value;
if (parameter == null && isNullCacheSet)
{
return nullCache;
}
if (parameter == null)
{
nullCache = function(parameter);
isNullCacheSet = true;
return nullCache;
}
if (cache.TryGetValue(parameter, out value))
{
return value;
}
value = function(parameter);
cache.Add(parameter, value);
return value;
};
}
}
(Quoted from a french blog article)
In the vein of posting memoization in different languages, I'd like to respond to #onebyone.livejournal.com with a non-language-changing C++ example.
First, a memoizer for single arg functions:
template <class Result, class Arg, class ResultStore = std::map<Arg, Result> >
class memoizer1{
public:
template <class F>
const Result& operator()(F f, const Arg& a){
typename ResultStore::const_iterator it = memo_.find(a);
if(it == memo_.end()) {
it = memo_.insert(make_pair(a, f(a))).first;
}
return it->second;
}
private:
ResultStore memo_;
};
Just create an instance of the memoizer, feed it your function and argument. Just make sure not to share the same memo between two different functions (but you can share it between different implementations of the same function).
Next, a driver function and an implementation. Only the driver function need be public:
int fib(int); // driver
int fib_(int); // implementation
Implemented:
int fib_(int n){
++total_ops;
if(n == 0 || n == 1)
return 1;
else
return fib(n-1) + fib(n-2);
}
And the driver, to memoize
int fib(int n) {
static memoizer1<int,int> memo;
return memo(fib_, n);
}
Permalink showing output on codepad.org. Number of calls is measured to verify correctness. (insert unit test here...)
This only memoizes one-input functions. Generalizing for multiple args or varying arguments is left as an exercise for the reader.
In Perl, generic memoization is easy to get. The Memoize module is part of the Perl core and is highly reliable, flexible, and easy to use.
The example from its manpage:
# This is the documentation for Memoize 1.01
use Memoize;
memoize('slow_function');
slow_function(arguments); # Is faster than it was before
You can add, remove, and customize memoization of functions at run time! You can provide callbacks for custom memento computation.
Memoize.pm even has facilities for making the memento cache persistent, so it does not need to be re-filled on each invocation of your program!
Here's the documentation: http://perldoc.perl.org/5.8.8/Memoize.html
Extending the idea, it's also possible to memoize functions with two input parameters:
function memoize2 (f)
local cache = {}
return function (x, y)
if cache[x..','..y] then
return cache[x..','..y]
else
local z = f(x,y)
cache[x..','..y] = z
return z
end
end
end
Notice that parameter order matters in the caching algorithm, so if parameter order doesn't matter in the function being memoized, the odds of getting a cache hit can be increased by sorting the parameters before checking the cache.
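To illustrate that normalization idea concretely (in C rather than Lua, with a toy fixed-size cache and names of my own), a commutative function can let f(a, b) and f(b, a) share one cache slot:
#include <stdio.h>

#define CACHE_MAX 256

struct entry { int a, b, result; };
static struct entry cache[CACHE_MAX];
static int cache_len = 0;

static int slow_add(int a, int b) { return a + b; }    /* stand-in for an expensive commutative function */

static int memo_add(int a, int b)
{
    if (a > b) { int t = a; a = b; b = t; }            /* argument order doesn't matter, so normalize it */
    for (int i = 0; i < cache_len; i++)
        if (cache[i].a == a && cache[i].b == b)
            return cache[i].result;
    int r = slow_add(a, b);
    if (cache_len < CACHE_MAX)
        cache[cache_len++] = (struct entry){ a, b, r };
    return r;
}

int main(void)
{
    printf("%d %d\n", memo_add(3, 5), memo_add(5, 3)); /* the second call is a cache hit */
    return 0;
}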
But it's important to note that some functions can't be profitably memoized. I wrote memoize2 to see if the recursive Euclidean algorithm for finding the greatest common divisor could be sped up.
function gcd (a, b)
if b == 0 then return a end
return gcd(b, a%b)
end
As it turns out, gcd doesn't respond well to memoization. The calculation it does is far less expensive than the caching algorithm. Even for large numbers, it terminates fairly quickly. After a while, the cache grows very large. This algorithm is probably as fast as it can be.
Recursion isn't necessary. The nth triangle number is n(n+1)/2, so...
public int triangle(final int n){
return n * (n + 1) / 2;
}
Please don't recurse this. Either use the x*(x+1)/2 formula or simply iterate the values and memoize as you go.
int[] memo = new int[n+1];
int sum = 0;
for(int i = 0; i <= n; ++i)
{
sum+=i;
memo[i] = sum;
}
return memo[n];