Is optimization necessary when generating code for a runtime with a JIT?

I'm planning to write a programming language targeting the .NET platform, which got me thinking about the code generation aspect of targeting such a platform. I'm new to writing compilers, but I know that optimization is (or can be) one of the phases of compiling. I started to wonder whether there is any benefit to spending time optimizing the output (in this case CIL, but this would apply to the JVM too), because the JIT compiler and things like the JVM's HotSpot can optimize at run time. Is there any benefit to optimizing the generated code (CIL or the JVM equivalent) when targeting .NET or the JVM, since the JIT will already optimize?

It depends. There are countless optimizations. Any given compiler (your compiler, the JIT compiler, or any other compiler) necessarily implements only a subset of those. This choice depends on available time, typical/expected input code, priorities, etc. and therefore the engineers who built the JIT compiler may have selected optimizations which work well for the programs they were expecting, but not so well for the kind of program you care about.
You will have to determine what optimizations the JIT compiler misses. The way to do this is, of course, empirical: Actually write programs, letting the JIT compiler optimize them (be sure to do this part properly - disable debugging, compile for release, choose realistic benchmarks, etc.), and then inspect the final machine code. Look for unexpected code (you will, of course, need assembly knowledge for this) and determine if it's a missed optimization or if the JIT was smarter than you thought.
If it is a missed optimization, you have another problem: You can't output the machine code you want, you have to generate different IL instead.
A missed optimization is probably due to a language feature the VM doesn't know about (e.g. multimethods on the JVM). You lowered it into the VM's terms during compilation, but the translation you chose doesn't sit well with the JIT's order of passes, heuristics, etc.
As you can't just output machine code yourself, you must now find an alternative IL fragment for the same input language code. Ideally, one which the JIT compiler does handle well. Finding that may be an exercise in imagination, but it's not technically hard, just guesswork interleaved with benchmarking.
As another answer points out, JIT compilers work under time constraints. This may lead to optimizations that could happen being missed (e.g. constant propagation running out of time), but as the creators of the JIT compiler faced the same problem, this probably isn't too severe if you don't create much larger/more complicated code.
If you create such bad code that the JIT compiler can't fix it all, then you have to duplicate its optimizations in your AOT compiler. I'm not convinced that this is a likely scenario though, and even if it happens, fairly simple optimizations should mostly fix the problem.
So, in summary: Start with a straightforward translation, then seek out missed optimizations and either make it easier to optimize for the JIT compiler, or do it yourself (if possible - adaptive optimization is much harder in an AOT setting).

I think this question is hard to answer in general.
For example, the F# compiler performs tail call optimization: tail-recursive functions are common in that language, the F# compiler can in some cases optimize them better than the JIT compiler, and some versions of the JIT compiler don't perform the optimization at all.
So, your language might have some common operation whose straightforward implementation wouldn't perform well. In that case, it makes sense to emit IL code that's optimized.
What I think you should do is the same as when you're writing a normal program: first write your code in a way that is simple and readable. Only if something doesn't perform well, attempt to optimize it. It may be worth anticipating that you'll need some optimizations in the future and keeping your code modular enough that you don't have to rewrite half of it for the sake of one optimization. But for now, that should be enough.
Writing a compiler is a hard enough job already (even if you're targeting an IL). Finish it first and think about optimizations later.

Generally, JIT compilers have some thresholds governing how much optimization they will attempt to perform. These may be based on the size of a method's IL and/or the amount of time already spent JIT compiling the method. So yes, IL which has already been optimized may benefit from further JIT optimization. As always, there is a trade-off: how much time do you want to spend adding AOT optimizations to your compiler (and testing/maintaining them) versus how quickly your code can be JIT compiled, and with what level of optimization.
The magnitude of the improvement depends largely on how much simpler (and smaller) the AOT-optimized IL is relative to the unoptimized IL, as well as the thresholds governing the JIT compiler (which, at least for the Microsoft CLR, are not widely known). The only way to find out is to do some testing yourself.

Related

What is Kotlin's impact on JVM performance?

I'm aware that functional programming/lambdas aren't the best option in the Java world when extremely high performance is the goal. Kotlin addresses that issue with the inline keyword.
When Kotlin is compiled and lambdas are inlined, it is actually creating bigger methods, and the JIT has a hard cap of N bytes for what it will inline to native code. With that in mind, doesn't Kotlin's inline hurt the JIT's inlining and therefore affect performance?
Also, I noticed that Kotlin adds a LOT of null checks to the compiled code. Those checks are really small methods and will definitely be inlined by the JIT, but given the number of calls, couldn't that also be a performance issue?
So, if you are aiming for the highest possible performance, what is Kotlin's impact on the JVM?
Side note: I know, I know... "premature optimization is the root of all evil", "you shouldn't care about performance at this level".
You can override the JIT's limit on the bytecode size it will inline. See the answer at What is the size of methods that JIT automatically inlines? for more details.
I’m surprised there’s lots of null checks in the bytecode though. Normally Java doesn’t do that and instead lets the CPU handle those automatically and with higher performance.
They may be elided by the JVM’s JIT though so perhaps won’t be an issue once warmed up, but if you want the higher performance you probably want to put your server through dummy data runs to ensure that the JIT is warmed up. There’s nothing worse than a warm JIT hitting a compilation event and having to go through it all again.
If you watch the JIT compilation events and look out for "hot method" messages about methods not being compiled or inlined, then you can tweak those methods individually.
Generally though, if you avoid lambdas you'll find the JVM won't need to do as much inlining and, more importantly, won't need to do as much allocating. Lambdas are pretty terrible at accidentally capturing state and causing allocations to occur. Ideally, in a high-throughput/low-latency system you want to avoid as much allocation as possible.

How is a compiled language better than an interpreted language at optimizing for the hardware?

Specifically, how is a compiled language able to optimize for the hardware better than an interpreted language? Other online sources I have read only gave vague explanations, like "because it is translated into the native code of the target machine", while some offered no explanation at all. I would appreciate it if the explanation could be as layman-friendly as possible, given that I've only just started to code.
One major reason is optimizing compilers. Compiling "in advance" makes it much easier to apply optimizations to code, especially if you're compiling to native assembly code (as you typically do in C, for example). Knowing something about the machine the program is going to be deployed on allows you to do machine-specific optimizations. This matters especially for processors such as the Pentium family, which have numerous specialized instructions that tend to require some knowledge of program structure in order to use (e.g. the MMX instruction set).
There are also some cases where the compiler can make structural changes to programs. For example, under special circumstances, some compilers can replace recursion with loops. (I once heard of someone writing a recursive Factorial function in C to learn about how to implement recursion in assembly language only to realize to his horror that the compiler had recognized an optimization and replaced his recursion with a for loop).
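As a concrete (if simplified) sketch of that, here is a tail-recursive factorial in C++; mainstream compilers at ordinary optimization levels will typically rewrite the self-call as a loop, which is exactly the transformation described above (the function name and signature are just illustrative):
#include <cstdint>

// Tail-recursive factorial: the recursive call is the last thing the function
// does, so an optimizing compiler can replace the call with a jump back to
// the top of the function, i.e. an ordinary loop.
std::uint64_t factorial(std::uint64_t n, std::uint64_t acc = 1) {
    if (n <= 1)
        return acc;                   // base case: hand back the accumulator
    return factorial(n - 1, acc * n); // tail call: eligible for loop conversion
}
Without optimization you get a genuine call per step; with optimization the generated code is usually indistinguishable from a hand-written for loop.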

Are there any reasons to compile without optimizations?

In most projects I don't see any -Ox flags, which you would think would be standard for every project, since they can dramatically increase the speed of the program.
Are there any specific reasons not to compile with -Ox or its non-gcc counterparts?
It's much easier to debug an unoptimised program, because the object code tends to be a more direct translation of the source code. With optimisation enabled the compiler might re-order statements or eliminate them entirely by coalescing several operations into one. That means when debugging a program (or a core dump) there isn't such a direct mapping from a position in the program image to a line of source code.
GCC 4.8 adds a new optimisation level that is a great compromise between performance and debuggability:
A new general optimization level, -Og, has been introduced. It addresses the need for fast compilation and a superior debugging experience while providing a reasonable level of runtime performance. Overall experience for development should be better than the default optimization level -O0.
With -Og the compiler does simple optimisations that don't make it harder to debug and don't take too long to compile, so the code performs better than completely unoptimised code but can still be debugged.
One reason is if you want to step through the code with a debugger. If optimizations are enabled statements can be reordered or missed out and the execution of the program will not have to follow what is in the source code step-by-step. Values of variables may not be available because they have been optimized out. Thus it is hard to reason about what the program is doing as you run it in the debugger.
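To make that concrete, here is a tiny, hedged example (exact behaviour varies by compiler and flags) of how a local variable can disappear under optimization:
int answer() {
    int base = 6;
    int scaled = base * 7;   // likely constant-folded away when optimizing
    return scaled;           // the whole body may collapse to "return 42;"
}
When that happens, a debugger will typically report base and scaled as "optimized out", and breakpoints on the middle lines may never be hit.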
Another reason for avoiding optimization is that you have used undefined behaviour in your program, and optimization could cause your program to break. (In fact, that is a reason for using optimization - to find such bugs.)
This makes debugging more accurate without optimization: unused variables and redundant statements will not be skipped.
On a somewhat older project, all our tests were done with the debug build, including a lot of print statements. For the final build, delivered to the customer, it was decided not to use the less-tested retail build (using gcc's full optimization options), since that could introduce timing-related issues (or rather, expose defects that are currently masked by the specific timing of the debug builds), and since the customer was happy enough with the current speed of operation.
On my current project, a lot of code is to be placed in ROM (initially: all of it), and there we obviously don't want to remove dead code, as future updates - to be placed in RAM - can still make use of the ROM code, reducing space requirements in RAM.
Also, what would be the default? Optimize for space, or for execution time? Not choosing is the only right choice to make.

Can compiler optimization introduce bugs?

Today I had a discussion with a friend of mine and we debated for a couple of hours about "compiler optimization".
I defended the point that sometimes, a compiler optimization might introduce bugs or at least, undesired behavior.
My friend totally disagreed, saying that "compilers are built by smart people and do smart things" and thus, can never go wrong.
He didn't convince me at all, but I have to admit I lack real-life examples to strengthen my point.
Who is right here? If I am, do you have any real-life example where a compiler optimization produced a bug in the resulting software? If I'm mistaken, should I stop programming and learn fishing instead?
Compiler optimizations can introduce bugs or undesirable behaviour. That's why you can turn them off.
One example: a compiler can optimize the read/write access to a memory location, doing things like eliminating duplicate reads or duplicate writes, or re-ordering certain operations. If the memory location in question is only used by a single thread and is actually memory, that may be ok. But if the memory location is a hardware device IO register, then re-ordering or eliminating writes may be completely wrong. In this situation you normally have to write code knowing that the compiler might "optimize" it, and thus knowing that the naive approach doesn't work.
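A small sketch of that situation (the address and names are invented for illustration; real device code would come from the hardware's documentation):
#include <cstdint>

// Hypothetical memory-mapped command register. The volatile qualifier tells
// the compiler every access is observable, so it must keep both stores and
// their order; without it, the first write could legally be removed as a
// duplicate.
volatile std::uint32_t* const DEVICE_CMD =
    reinterpret_cast<volatile std::uint32_t*>(0x40000000);

void reset_then_start() {
    *DEVICE_CMD = 0x1;  // reset command: must actually reach the device
    *DEVICE_CMD = 0x2;  // start command: must remain a separate write
}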
Update: As Adam Robinson pointed out in a comment, the scenario I describe above is more of a programming error than an optimizer error. But the point I was trying to illustrate is that some programs, which are otherwise correct, combined with some optimizations, which otherwise work properly, can introduce bugs in the program when they are combined together. In some cases the language specification says "You must do things this way because these kinds of optimizations may occur and your program will fail", in which case it's a bug in the code. But sometimes a compiler has a (usually optional) optimization feature that can generate incorrect code because the compiler is trying too hard to optimize the code or can't detect that the optimization is inappropriate. In this case the programmer must know when it is safe to turn on the optimization in question.
Another example:
The linux kernel had a bug where a potentially NULL pointer was being dereferenced before a test for that pointer being null. However, in some cases it was possible to map memory to address zero, thus allowing the dereferencing to succeed. The compiler, upon noticing that the pointer was dereferenced, assumed that it couldn't be NULL, then removed the NULL test later and all the code in that branch. This introduced a security vulnerability into the code, as the function would proceed to use an invalid pointer containing attacker-supplied data. For cases where the pointer was legitimately null and the memory wasn't mapped to address zero, the kernel would still OOPS as before. So prior to optimization the code contained one bug; after it contained two, and one of them allowed a local root exploit.
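The pattern looked roughly like this (a simplified reconstruction, not the actual kernel source; the type and field names are made up):
struct sock { int flags; };

int handle(struct sock* sk) {
    int f = sk->flags;    // dereference happens first...
    if (sk == nullptr)    // ...so the optimizer may conclude sk cannot be null
        return -1;        //    and remove this test and its branch entirely
    return f;
}
Because the dereference precedes the test, the compiler is entitled to assume the pointer is non-null; the straightforward fix is to test before dereferencing (and GCC also offers -fno-delete-null-pointer-checks for code that cannot be fixed immediately).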
CERT has a presentation called "Dangerous Optimizations and the Loss of Causality" by Robert C. Seacord which lists a lot of optimizations that introduce (or expose) bugs in programs. It discusses the various kinds of optimizations that are possible, from "doing what the hardware does" to "trap all possible undefined behaviour" to "do anything that's not disallowed".
Some examples of code that's perfectly fine until an aggressively-optimizing compiler gets its hands on it:
Checking for overflow
// fails because the overflow test gets removed
if (ptr + len < ptr || ptr + len > max) return EINVAL;
Using overflow arithmetic at all:
// The compiler optimizes this to an infinite loop
for (i = 1; i > 0; i += i) ++j;
Clearing memory of sensitive information:
// the compiler can remove these "useless writes"
memset(password_buffer, 0, sizeof(password_buffer));
The problem here is that compilers were, for decades, less aggressive in optimization, and so generations of C programmers learned and understood things like fixed-size two's complement addition and how it overflows. Then the C language standard is amended by compiler developers, and the subtle rules change, despite the hardware not changing. The C language spec is a contract between the developers and the compilers, but the terms of the agreement are subject to change over time, and not everyone understands every detail, or agrees that the details are even sensible.
This is why most compilers offer flags to turn off (or turn on) optimizations. Is your program written with the understanding that integers might overflow? Then you should turn off overflow optimizations, because they can introduce bugs. Does your program strictly avoid aliasing pointers? Then you can turn on the optimizations that assume pointers are never aliased. Does your program try to clear memory to avoid leaking information? Oh, in that case you're out of luck: you either need to turn off dead-code-removal or you need to know, ahead of time, that your compiler is going to eliminate your "dead" code, and use some work-around for it.
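For the memory-clearing case, one common workaround (a sketch, not a portability guarantee; the function name is mine) is to write through a volatile pointer so the stores cannot be classified as dead:
#include <cstddef>

// Each store is made through a volatile lvalue, so the compiler must perform
// it even though the buffer is never read afterwards.
void secure_zero(void* p, std::size_t n) {
    volatile unsigned char* vp = static_cast<volatile unsigned char*>(p);
    while (n--)
        *vp++ = 0;
}
Where they are available, dedicated functions such as explicit_bzero or C11's optional memset_s express the same intent more directly.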
When a bug goes away by disabling optimizations, most of the time it's still your fault
I am responsible for a commercial app, written mostly in C++ - started with VC5, ported to VC6 early, now successfully ported to VC2008. It grew to over 1 Million lines in the last 10 years.
In that time I could confirm a single code generation bug that occurred when aggressive optimizations were enabled.
So why am I complaining? Because in the same period, there were dozens of bugs that made me doubt the compiler - but they turned out to be the result of my insufficient understanding of the C++ standard. The standard makes room for optimizations the compiler may or may not make use of.
Over the years on different forums, I've seen many posts blaming the compiler, ultimately turning out to be bugs in the original code. No doubt many of them obscure bugs that need a detailed understanding of concepts used in the standard, but source code bugs nonetheless.
Why I reply so late: don't blame the compiler until you have confirmed it's actually the compiler's fault.
Compiler (and runtime) optimization can certainly introduce undesired behaviour - but it at least should only happen if you're relying on unspecified behaviour (or indeed making incorrect assumptions about well-specified behaviour).
Now beyond that, of course compilers can have bugs in them. Some of those may be around optimisations, and the implications could be very subtle - indeed they're likely to be, as obvious bugs are more likely to be fixed.
Assuming you include JITs as compilers, I've seen bugs in released versions of both the .NET JIT and the Hotspot JVM (I don't have details at the moment, unfortunately) which were reproducible in particularly odd situations. Whether they were due to particular optimisations or not, I don't know.
To combine the other posts:
Compilers do occasionally have bugs in their code, like most software. The "smart people" argument is completely irrelevant to this, as NASA satellites and other applications built by smart people also have bugs. The code that performs optimization is different from the code that doesn't, so if the bug happens to be in the optimizer, then your optimized build may indeed contain errors while your non-optimized build does not.
As Mr. Shiny and New pointed out, it's possible for code that is naive with regard to concurrency and/or timing issues to run satisfactorily without optimization yet fail with optimization as this may change the timing of execution. You could blame such a problem on the source code, but if it will only manifest when optimized, some people might blame optimization.
Just one example: a few days ago, someone discovered that gcc 4.5 with the option -foptimize-sibling-calls (which is implied by -O2) produces an Emacs executable that segfaults on startup.
This has apparently been fixed since.
I've never heard of or used a compiler whose directives could not alter the behaviour of a program. Generally this is a good thing, but it does require you to read the manual.
AND I had a recent situation where a compiler directive 'removed' a bug. Of course, the bug is really still there but I have a temporary workaround until I fix the program properly.
Yes. A good example is the double-checked locking pattern. In C++ there is no way to safely implement double-checked locking because the compiler can re-order instructions in ways that make sense in a single-threaded system but not in a multi-threaded one. A full discussion can be found at http://www.aristeia.com/Papers/DDJ_Jul_Aug_2004_revised.pdf
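For reference, the broken pattern looks roughly like this (a sketch of the classic pre-C++11 formulation; the class and member names are only illustrative):
#include <mutex>

class Singleton {
public:
    static Singleton* get() {
        if (instance == nullptr) {                 // first check, no lock
            std::lock_guard<std::mutex> lock(mtx);
            if (instance == nullptr)               // second check, under the lock
                instance = new Singleton();        // allocation, construction and
        }                                          // pointer store may be reordered
        return instance;
    }
private:
    Singleton() = default;
    static Singleton* instance;
    static std::mutex mtx;
};

Singleton* Singleton::instance = nullptr;
std::mutex Singleton::mtx;
The danger is that another thread can see a non-null instance before the object's construction is visible to it. Since C++11, a function-local static (or std::atomic with suitable memory ordering) is the usual way to get this right.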
Is it likely? Not in a major product, but it's certainly possible. Compiler optimizations are generated code; no matter where code comes from (you write it or something generates it), it can contain errors.
I encountered this a few times with a newer compiler building old code. The old code would work but relied on undefined behavior in some cases, like improperly defined / cast operator overload. It would work in VS2003 or VS2005 debug build, but in release it would crash.
Opening up the assembly generated it was clear that the compiler had just removed 80% of the functionality of the function in question. Rewriting the code to not use undefined behavior cleared it up.
More obvious example: VS2008 vs GCC
Declared:
void foo( const type & tp );
Called:
foo( foo2() );
where foo2() returns an object of class type;
Tends to crash in GCC because the object isn't allocated on the stack in this case, but VS does some optimization to get around this and it will probably work.
Aliasing can cause problems with certain optimizations, which is why compilers have an option to disable those optimizations. From Wikipedia:
To enable such optimizations in a predictable manner, the ISO standard for the C programming language (including its newer C99 edition) specifies that it is illegal (with some exceptions) for pointers of different types to reference the same memory location. This rule, known as "strict aliasing", allows impressive increases in performance[citation needed], but has been known to break some otherwise valid code. Several software projects intentionally violate this portion of the C99 standard. For example, Python 2.x did so to implement reference counting,[1] and required changes to the basic object structs in Python 3 to enable this optimisation. The Linux kernel does this because strict aliasing causes problems with optimization of inlined code.[2] In such cases, when compiled with gcc, the option -fno-strict-aliasing is invoked to prevent unwanted or invalid optimizations that could produce incorrect code.
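A short sketch of code that runs into this rule, next to the conforming alternative (function names are only illustrative):
#include <cstdint>
#include <cstring>

// Violates strict aliasing: a float object is read through a uint32_t lvalue,
// so an optimizing compiler may cache or reorder accesses on the assumption
// that the two types never refer to the same memory.
std::uint32_t float_bits_bad(float f) {
    return *reinterpret_cast<std::uint32_t*>(&f);
}

// Conforming alternative: copy the bytes. Mainstream compilers typically
// compile this down to a single register move, with no undefined behaviour.
std::uint32_t float_bits_ok(float f) {
    std::uint32_t u;
    std::memcpy(&u, &f, sizeof u);
    return u;
}
When such code can't easily be rewritten, the -fno-strict-aliasing flag mentioned above is the other escape hatch.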
Yes, compiler optimizations can be dangerous. Usually hard real-time software projects forbid optimizations for this very reason. Anyway, do you know of any software with no bugs?
Aggressive optimizations may cache variables or even make strange assumptions about them. The problem is not only the stability of your code; they can also fool your debugger. I have seen a debugger fail to show the contents of memory several times because an optimization kept a variable's value in the processor's registers.
The very same thing can happen to your code: the optimization puts a variable into a register and does not write it back to memory until it has finished. Now imagine how different things can be if your code has pointers to variables on your stack and several threads.
It's theoretically possible, sure. But if you don't trust the tools to do what they are supposed to do, why use them? That said, anyone arguing from the position of
"compilers are built by smart people and do smart things" and thus, can never go wrong
is making a foolish argument.
So, until you have reason to believe that a compiler is doing so, why posture about it?
I certainly agree that it's silly to say that because compilers are written by "smart people" they are therefore infallible. Smart people designed the Hindenburg and the Tacoma Narrows Bridge, too. Even if it's true that compiler writers are among the smartest programmers out there, it's also true that compilers are among the most complex programs out there. Of course they have bugs.
On the other hand, experience tells us that the reliability of commercial compilers is very high. Many, many times someone has told me that the reason his program doesn't work MUST be a bug in the compiler, because he has checked it very carefully and is sure it is 100% correct ... and then we find that in fact the program has an error, not the compiler. I'm trying to think of times that I've personally run across something that I was truly sure was an error in the compiler, and I can only recall one example.
So in general: Trust your compiler. But are they ever wrong? Sure.
It can happen. It has even affected Linux.
As I recall, early Delphi 1 had a bug where the results of Min and Max were reversed. There was also an obscure bug with some floating point values only when the floating point value was used within a dll. Admittedly, it has been more than a decade, so my memory may be a bit fuzzy.
I had a problem in .NET 3.5: if you build with optimization and add another variable to a method that is named similarly to an existing variable of the same type in the same scope, then one of the two (the new or the old variable) will not be valid at runtime, and all references to the invalid variable are replaced with references to the other.
So, for example, if I have abcd of MyCustomClass type and I have abdc of MyCustomClass type, and I set abcd.a=5 and abdc.a=7, then both variables will have property a=7. To fix the issue, both variables should be removed, the program compiled (hopefully without errors), and then they should be re-added.
I think I have run into this problem a few times with .NET 4.0 and C# when doing Silverlight applications also. At my last job we ran into the problem quite often in C++. It might have been because the compilations took 15 minutes so we would only build the libraries we needed, but sometimes the optimized code was exactly the same as the previous build even though new code had been added and no build errors had been reported.
Yes, code optimizers are built by smart people. They are also very complicated so having bugs is common. I suggest fully testing any optimized release of a large product. Usually limited use products are not worth a full release, but they should still be generally tested to make sure they perform their common tasks correctly.
Compiler optimization can reveal (or activate) dormant (or hidden) bugs in your code. There may be a bug in your C++ code that you don't know about and just don't see. In that case, it is a hidden or dormant bug, because that branch of the code is not executed [enough times].
The likelihood of a bug in your code is much greater (thousands of times greater) than the likelihood of a bug in the compiler's code, because compilers are tested extensively: by TDD and, practically, by all the people who have used them since their release. So it is very unlikely that a bug would be discovered by you yet not discovered in the literally hundreds of thousands of times the compiler has been used by other people.
A dormant or hidden bug is just a bug that has not revealed itself to the programmer yet. People who can claim that their C++ code has no (hidden) bugs are very rare. It requires C++ knowledge (very few can claim that) and extensive testing of the code. It is not just about the programmer, but about the code itself (the style of development). Being bug-prone is in the character of the code (how rigorously it is tested) and/or of the programmer (how disciplined they are about testing and how well they know C++ and programming).
Security + concurrency bugs: this is even worse if we include concurrency and security as bugs. And after all, these are bugs. Writing code that is bug-free in terms of concurrency and security in the first place is almost impossible. That's why there is often already a bug in the code, which can be revealed (or go unnoticed) under compiler optimization.
More, and more aggressive, optimizations can be enabled if the program you compile has a good test suite. Then it is possible to run that suite and be somewhat more confident that the program operates correctly. Also, you can prepare your own tests that closely match what you plan to do in production.
It is also true that any large program may have (and probably does have) some bugs, independently of which switches you use to compile it.
I work on a large engineering application, and every now and then we see release-only crashes and other problems reported by clients. Our code has 37 files (out of around 6000) where we have this at the top of the file, to turn off optimization and fix such crashes:
#pragma optimize( "", off)
(We use Microsoft Visual C++ native, 2015, but it is true for just about any compiler, except maybe Intel Fortran 2016 update 2, where we have not yet turned off any optimizations.)
If you search through the Microsoft Visual Studio feedback site you can find some optimization bugs there as well. We occasionally log some of ours (if you can reproduce it easily enough with a small section of code and you are willing to take the time) and they do get fixed, but sadly others get introduced again. *smiles*
Compilers are programs written by people, and any big program has bugs, trust me on that. The compiler optimization options most certainly have bugs, and turning on optimization can certainly introduce bugs in your program.
Everything that you can possibly imagine doing with or to a program will introduce bugs.
Because of exhaustive testing and the relative simplicity of actual C++ code (C++ has under 100 keywords/operators), compiler bugs are relatively rare. Bad programming style is often the only thing that encounters them. And usually the compiler will crash or produce an internal compiler error instead. The only exception to this rule is GCC. GCC, especially older versions, had a lot of experimental optimizations enabled at -O3 and sometimes even at the other -O levels. GCC also targets so many backends that this leaves more room for bugs in its intermediate representation.
I had a problem with .NET 4 yesterday with something that looked like...
double x=0.4;
if(x<0.5) { below5(); } else { above5(); }
And it would call above5(); But if I actually use x somewhere, it would call below5();
double x=0.4;
if(x<0.5) { below5(); } else { System.Console.Write(x); above5(); }
Not the exact same code but similar.

How do modern optimizing compilers determine when to optimize?

How do modern optimizing compilers determine when to apply certain optimizations such as loop unrolling and code inlining?
Since both of these affect caching, naively inlining functions with fewer than X lines, or applying whatever other simple heuristic, is likely to generate worse-performing code. So, how do modern compilers deal with this?
I'm having a hard time finding information on this (especially information that's reasonably easy to understand); about the best I could find is the Wikipedia article. Any details or links to books/articles/papers are greatly appreciated!
EDIT: Since answers are talking mainly about the two optimizations I mentioned (inlining and loop unrolling) I just wanted to clarify that I'm interested in all and any compiler optimizations, not just those two. I'm also more interested in the optimizations which can be performed during ahead-of-time compilation, though JIT optimization is of interest too (though to a slightly lesser extent).
Thanks!
Usually by being that naive anyway and hoping it is an improvement.
This is why just-in-time compilation is such a winning strategy. Collect statistics then optimize for the common case.
References:
http://lambda-the-ultimate.org/node/768
GCC supports Profile Guided Optimization
And of course the Sun hotspot JVM
You can look at the Spiral project.
On top of that, optimizing is a tough thing to do generically. This is, in part, why there are so many options for the gcc compiler. If you know something about cache and pages you can do some things by hand and request that others be done through the compiler, but no two machines are the same, so the approach must be ad hoc.
In short: better than we do!
You can have a look at this: http://www.linux-kongress.org/2009/slides/compiler_survey_felix_von_leitner.pdf
Didier
Good question. You are asking about so-called speculative optimizations.
Dynamic compilers use both static heuristics and profile information. Static compilers employ heuristics and (offline) profile information. The latter is often referred to as PGO (Profile Guided Optimization).
There are a lot of articles on inlining policies. The most comprehensive one is
An Empirical Study of Method Inlining for a Java Just-In-Time Compiler
It also contains references to related work and sharp (and justified) criticism of some of the articles considered.
In general, state-of-the-art compilers try to use impact analysis to estimate potential effect of speculative optimizations before applying them.
P.S. Loop unrolling is classic old stuff which helps only for some tight loops that perform nothing but number-crunching ops (no calls and so on). Method inlining is a much more important optimization in modern compilers.