I ran into an unexpected code optimization recently, and wanted to check whether my interpretation of what I was observing was correct. The following is a much simplified example of the situation:
let demo =
    let swap fst snd i =
        if i = fst then snd else
        if i = snd then fst else
        i
    [ for i in 1 .. 10000 -> swap 1 i i ]

let demo2 =
    let swap (fst: int) snd i =
        if i = fst then snd else
        if i = snd then fst else
        i
    [ for i in 1 .. 10000 -> swap 1 i i ]
The only difference between the two blocks of code is that in the second case I explicitly declare the arguments of swap as integers. Yet when I run the two snippets in fsi with #time, I get:
Case 1 Real: 00:00:00.011, CPU: 00:00:00.000, GC gen0: 0, gen1: 0, gen2: 0
Case 2 Real: 00:00:00.004, CPU: 00:00:00.015, GC gen0: 0, gen1: 0, gen2: 0
That is, the second snippet runs about three times faster than the first. The absolute performance difference here is obviously not an issue, but if I used the swap function heavily, the cost would add up.
My assumption is that the reason for the performance hit is that in the first case, swap is generic and "requires equality", and checks whether int supports it, whereas the second case doesn't have to check anything. Is this the reason this is happening, or am I missing something else? And more generally, should I consider automatic generalization a double-edged sword, that is, an awesome feature which may have unexpected effects on performance?
I think this is generally the same case as in the question Why is this F# code so slow. In that question, the performance issue is caused by a constraint requiring comparison and in your case, it is caused by the equality constraint.
In both cases, the compiled generic code has to use interfaces (and boxing), while the specialized compiled code can directly use IL instructions for comparison or equality of integers or floating-point numbers.
The two ways to avoid the performance issue are:
Specialize the code to use int or float as you did
Mark the function as inline so that the compiler specializes it automatically
For smaller functions, the second approach is better, because it does not generate too much code and you can still write functions in a generic way. If you use the function for just a single type (by design), then the first approach is probably appropriate.
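To illustrate the second approach, here is a minimal sketch (the binding demo3 is my name, not from the question): marking swap as inline makes the compiler specialize it at each call site, so the call with ints compiles down to direct integer comparison instead of a generic equality check:

let inline swap fst snd i =
    if i = fst then snd else
    if i = snd then fst else
    i

// Specialized to int at this call site; no generic equality at run time.
let demo3 = [ for i in 1 .. 10000 -> swap 1 i i ]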
The reason for the difference is that the compiler is generating calls to
IL_0002: call bool class [FSharp.Core]Microsoft.FSharp.Core.LanguagePrimitives/HashCompare::GenericEqualityIntrinsic<!!0> (!!0, !!0)
in the generic version, whilst the int version can just compare directly.
If you used inline, I suspect this problem would disappear, as the compiler would then have the extra type information.
I am looking for an SWI-Prolog predicate that can compute a set union with variables as parameters. My aim is to build the union first and instantiate the parameters later on in the source code.
For example, I have some predicate union, and the call union(A, B, A_UNION_B) makes sense. Furthermore, the call:
union(A, [1,2], C), A=[3].
would give me as result
C = [3, 1, 2].
(What you call union/3 is most probably just concatenation, so I will use append/3 to keep this answer short.)
What you expect is impossible without delayed goals or constraints. To see this, consider the following failure-slice:
?- append(A, [1,2], C), false, A=[3].
loops, unexpected. % observed, but for us unexpected
false. % expected, but not the case
This query must terminate, in order to make the entire question useful. But there are infinitely many lists of different length for A. So in order to describe all possible solutions, we would need infinitely many answer substitutions, like
?- append(A, [1,2], C).
A = [], C = [1,2]
; A = [_A], C = [_A,1,2]
; A = [_A,_B], C = [_A,_B,1,2]
; A = [_A,_B,_C], C = [_A,_B,_C,1,2]
; ... .
The only way around is to describe that set of solutions with finitely many answers. One possibility could be:
?- when((ground(A);ground(C)), append(A,B,C)).
when((ground(A);ground(C)),append(A,B,C)).
Essentially it reads: Yes, the query is true, provided the query is true.
While this solves your exact problem, it will now delay many otherwise succeeding goals; think of A = [X], B = [].
A more elaborate version could provide more complex tests, but it would require a somewhat different definition than that of append/3. Some systems, like SICStus Prolog, provide block declarations to make this smoother (SWI has a coarse emulation of that).
So it is possible to make this even better, but the question remains whether or not this makes much sense. After all, debugging delayed goals becomes more and more difficult with larger programs.
In many situations it is preferable to prevent this and produce an instantiation error instead, as iwhen/2 does:
?- iwhen((ground(A);ground(C)),append(A,B,C)).
error(instantiation_error,iwhen/2).
That error is not the nicest answer possible, but at least it is not incorrect. It says: You need to provide more instantiations.
If you really want to solve this problem for the general case, you have to delve into E-unification. That is an area with mostly trivial problem statements and extremely involved answers. Often, just decidability is non-trivial, let alone an effective algorithm. For your particular question, either ACI (for sets) or ANlr (for concatenation) is of interest. ACI requires solving Diophantine equations, and associative unification alone is even more complex than that. I am unaware of any implementation for a Prolog system that solves the general problem.
Prolog IV offered an associative infix operator for concatenation but simply delayed more complex cases. So debugging these remains non-trivial.
I had a task where I wanted to find the closest string to a target (so, edit distance) without generating them all at the same time. I figured I'd use the high water mark technique (low, I guess) while initializing the closest edit distance to Inf so that any edit distance is closer:
use Text::Levenshtein;

my @strings = < Amelia Fred Barney Gilligan >;

for @strings {
    put "$_ is closest so far: { longest( 'Camelia', $_ ) }";
    }

sub longest ( Str:D $target, Str:D $string ) {
    state Int $closest-so-far = Inf;
    state Str:D $closest-string = '';

    if distance( $target, $string ) < $closest-so-far {
        $closest-so-far = $string.chars;
        $closest-string = $string;
        return True;
        }

    return False;
    }
However, Inf is a Num so I can't do that:
Type check failed in assignment to $closest-so-far; expected Int but got Num (Inf)
I could make the constraint a Num and coerce to that:
state Num $closest-so-far = Inf;
...
$closest-so-far = $string.chars.Num;
However, this seems quite unnatural. And, since Num and Int aren't related, I can't have a constraint like Int(Num). I only really care about this for the first value. It's easy to set that to something sufficiently high (such as the length of the longest string), but I wanted something more pure.
Is there something I'm missing? I would have thought that any numbery thing could have a special value that was greater (or less than) all the other values. Polymorphism and all that.
{new intro that's hopefully better than the unhelpful/misleading original one}
@CarlMäsak, in a comment he wrote below this answer after my first version of it:
Last time I talked to Larry about this {in 2014}, his rationale seemed to be that ... Inf should work for all of Int, Num and Str
(The first version of my answer began with a "recollection" that I've concluded was at least unhelpful and plausibly an entirely false memory.)
In my research in response to Carl's comment, I did find one related gem in #perl6-dev in 2016 when Larry wrote:
then our policy could be, if you want an Int that supports ±Inf and NaN, use Rat instead
in other words, don't make Rat consistent with Int, make it consistent with Num
Larry wrote this post-6.c. I don't recall seeing anything like it discussed for 6.d.
{and now back to the rest of my first answer}
Num in P6 implements the IEEE 754 floating point number type. Per the IEEE spec this type must support several concrete values that are reserved to stand in for abstract concepts, including the concept of positive infinity. P6 binds the corresponding concrete value to the term Inf.
Given that this concrete value denoting infinity already existed, it became a language wide general purpose concrete value denoting infinity for cases that don't involve floating point numbers such as conveying infinity in string and list functions.
The solution to your problem that I propose below is to use a where clause via a subset.
A where clause allows one to specify run-time assignment/binding "typechecks". I quote "typecheck" because it's the most powerful form of check possible -- it's computationally universal and literally checks the actual run-time value (rather than a statically typed view of what that value can be). This means such checks are slower, and happen at run time rather than compile time, but it also makes them way more powerful (not to mention way easier to express) than even dependent types -- a relatively cutting-edge feature that those who are into advanced statically type-checked languages tend to claim as only available in their own world1, and which is intended to "prevent bugs by allowing extremely expressive types" (but good luck with figuring out how to express them... ;)).
A subset declaration can include a where clause. This allows you to name the check and use it as a named type constraint.
So, you can use these two features to get what you want:
subset Int-or-Inf where Int:D | Inf;
Now just use that subset as a type:
my Int-or-Inf $foo; # ($foo contains `Int-or-Inf` type object)
$foo = 99999999999; # works
$foo = Inf; # works
$foo = Int-or-Inf; # works
$foo = Int; # typecheck failure
$foo = 'a'; # typecheck failure
1. See Does Perl 6 support dependent types? and it seems the rough consensus is no.
I've created an interpreter for a simple language. It is AST-based (to be more exact, an irregular heterogeneous AST) with visitors executing and evaluating nodes. However, I've noticed that it is extremely slow compared to "real" interpreters. For testing, I ran this code:
i = 3
j = 3
has = false
while i < 10000
    j = 3
    has = false
    while j <= i / 2
        if i % j == 0 then
            has = true
        end
        j = j+2
    end
    if has == false then
        puts i
    end
    i = i+2
end
In both Ruby and my interpreter (it just finds primes primitively). Ruby finished in under 0.63 seconds, and my interpreter took over 15 seconds.
I develop the interpreter in C++ with Visual Studio, so I used the profiler to see what takes the most time: the evaluation methods.
50% of the execution time went to calling the abstract evaluation method, which casts the passed expression and calls the proper eval method. Something like this:
Value * eval (Exp * exp)
{
    switch (exp->type)
    {
        case EXP_ADDITION:
            return eval ((AdditionExp*) exp);
        ...
    }
}
I could put the eval methods into the Exp nodes themselves, but I want to keep the nodes clean (Terence Parr said something about reusability in his book).
Also, at evaluation I always construct a new Value object, which stores the result of the evaluated expression. Actually, Value is abstract and has derived value classes for the different types (that's why I work with pointers: to avoid object slicing when returning). I think this could be another reason for the slowness.
How could I make my interpreter as optimized as possible? Should I create bytecode from the AST and then interpret the bytecode instead? (As far as I know, that could be much faster.)
Here is the source if it helps understanding my problem: src
Note: I haven't done any error handling yet, so an illegal statement or an error will simply freeze the program. (Also, sorry for the stupid "error messages" :))
The syntax is pretty simple, the currently executed file is in OTZ1core/testfiles/test.txt (which is the prime finder).
I appreciate any help I can get; I'm a real beginner at compilers and interpreters.
One possibility for a speed-up would be to use a function table instead of the switch with dynamic retyping. Your call to the typed-eval is going through at least one, and possibly several, levels of indirection. If you distinguish the typed functions instead by name and give them identical signatures, then pointers to the various functions can be packed into an array and indexed by the type member.
Value *(*evaltab[])(Exp *) = {  // the order of the functions must match
    Exp_Add,                    // the order of the type values
    //...
};
Then the whole switch becomes:
evaltab[exp->type](exp);
1 indirection, 1 function call. Fast.
I am working on fairly large Mathematica projects, and the problem arises that I have to intermittently check numerical results but want to easily revert to having all my constructs in analytical form.
The code is fairly fluid, and I don't want to use scoping constructs everywhere, as they add work overhead. Is there an easy way to identify and clear all assignments that are numerical?
EDIT: I really do know that scoping is the way to do this correctly ;-). However, for my workflow I am really just looking for a dirty trick to nix all numerical assignments after the fact instead of having the foresight to put down a Block.
If your assignments are on the top level, you can use something like this:
a = 1;
b = c;
d = 3;
e = d + b;
Cases[DownValues[In],
HoldPattern[lhs_ = rhs_?NumericQ] |
HoldPattern[(lhs_ = rhs_?NumericQ;)] :> Unset[lhs],
3]
This will work if you have a sufficient history length $HistoryLength (defaults to infinity). Note however that, in the above example, e was assigned 3+c, and 3 here was not undone. So, the problem is really ambiguous in formulation, because some numbers could make it into definitions. One way to avoid this is to use SetDelayed for assignments, rather than Set.
Another alternative would be to analyze the names in, say, the Global` context (if that is the context where your symbols live), and then the OwnValues and DownValues of those symbols, in a fashion similar to the above, and remove definitions with a purely numerical r.h.s.
But IMO neither of these approaches is robust. I'd still use scoping constructs and try to isolate the numerics. One possibility is to wrap your final code in Block and assign numerical values inside this Block. This seems a much cleaner approach. The work overhead is minimal: you just have to remember which symbols you want to assign the values to. Block will automatically ensure that outside it, the symbols have no definitions.
EDIT
Yet another possibility is to use local rules. For example, one could define rule[a] = a -> 1; rule[d] = d -> 3 instead of the assignments above. You could then apply these rules, extracting them as, say, DownValues[rule][[All, 2]], whenever you want to test with some numerical arguments.
Building on Andrew Moylan's solution, one can construct a Block-like function that takes rules:
SetAttributes[BlockRules, HoldRest]
BlockRules[rules_, expr_] :=
  Block @@ Append[Apply[Set, Hold@rules, {2}], Unevaluated[expr]]
You can then save your numeric rules in a variable and use BlockRules[savedrules, code], or even define a function that applies a fixed set of rules, kind of like so:
In[76]:= NumericCheck =
Function[body, BlockRules[{a -> 3, b -> 2`}, body], HoldAll];
In[78]:= a + b // NumericCheck
Out[78]= 5.
EDIT In response to Timo's comment, it might be possible to use NotebookEvaluate (new in 8) to achieve the requested effect.
SetAttributes[BlockRules, HoldRest]
BlockRules[rules_, expr_] :=
  Block @@ Append[Apply[Set, Hold@rules, {2}], Unevaluated[expr]]
nb = CreateDocument[{ExpressionCell[
Defer[Plot[Sin[a x], {x, 0, 2 Pi}]], "Input"],
ExpressionCell[Defer[Integrate[Sin[a x^2], {x, 0, 2 Pi}]],
"Input"]}];
BlockRules[{a -> 4}, NotebookEvaluate[nb, InsertResults -> True];]
As the result of this evaluation, you get a notebook with your commands evaluated while a was locally set to 4. To take it further, you would have to take the notebook with your code, open a new notebook, evaluate Notebooks[] to identify the notebook of interest, and then do:
BlockRules[variablerules,
  NotebookEvaluate[NotebookPut[NotebookGet[nbobj]],
    InsertResults -> True]]
I hope you can make this idea work.
These languages do not support mutually recursive function optimization 'natively', so I guess it must be a trampoline or... heh... rewriting as a loop. Am I missing something?
UPDATE: It seems that I was wrong about F#, but I just hadn't seen an example of mutual tail calls while googling.
First of all, F# supports mutually recursive functions natively, because it can benefit from the tailcall instruction that's available in the .NET IL (MSDN). However, this is a bit tricky and may not work on some alternative implementations of .NET (e.g. the Compact Framework), so you may sometimes need to deal with it by hand.
In general, I think there are a couple of ways to deal with it:
Trampoline - throw an exception when the recursion depth gets too high and implement a top-level loop that handles the exception (the exception would carry the information needed to resume the call). Instead of an exception, you can also simply return a value specifying that the function should be called again.
Unwind using a timer - when the recursion depth gets too high, create a timer and give it a callback that the timer will invoke after some very short time (the timer will continue the recursion, but the stack used so far will be dropped).
The same thing can be done using a global stack that stores the work that needs to be done. Instead of scheduling a timer, you add the function to the stack. At the top level, the program picks functions from the stack and runs them.
To give a specific example of the first technique, in F# you could write this:
type Result<'T> =
    | Done of 'T
    | Call of (unit -> Result<'T>)

let rec factorial acc n =
    if n = 0 then Done acc
    else Call(fun () -> factorial (acc * n) (n - 1))
This can be used for mutually recursive functions as well. The imperative loop would simply call the f function stored in Call(f) until it produces Done with the final result. I think this is probably the cleanest way to implement this.
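As a minimal sketch of that imperative loop (the name run is mine, not from the original answer; it assumes the Result<'T> type above):

let run result =
    let mutable current = result
    let mutable finished = None
    while finished.IsNone do
        match current with
        | Done v -> finished <- Some v   // final result reached
        | Call f -> current <- f ()      // execute one suspended step
    finished.Value

// For example, run (factorial 1 10) evaluates to 3628800; each step
// returns control to the loop, so the stack never grows with the
// recursion depth.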
I'm sure there are other sophisticated techniques for dealing with this problem, but those are the two I know about (and that I used).
On Scala 2.8, scala.util.control.TailCalls:
import scala.util.control.TailCalls._

def isEven(xs: List[Int]): TailRec[Boolean] =
  if (xs.isEmpty) done(true)
  else tailcall(isOdd(xs.tail))

def isOdd(xs: List[Int]): TailRec[Boolean] =
  if (xs.isEmpty) done(false)
  else tailcall(isEven(xs.tail))

isEven((1 to 100000).toList).result
Just to have the code handy for when you Bing for F# mutual recursion:
let rec isOdd x =
    if x = 1 then true else isEven (x-1)
and isEven x =
    if x = 0 then true else isOdd (x-1)

printfn "%A" (isEven 10000000)
This will StackOverflow if you compile without tail calls (the default in "Debug" mode, which preserves stacks for easier debugging), but runs just fine when compiled with tail calls (the default in "Release" mode). The compiler emits tail calls by default (see the --tailcalls option), and .NET implementations on most platforms honor them.