Does the --force-lto gem5 scons build option speed up simulation significantly and how does it compare to a gem5.fast build?

While looking for ways to speed up my simulation, I came across the --force-lto option.
I've heard about LTO (Link Time Optimization) before, so that made me wonder: why isn't --force-lto the default when building gem5?
Would it make simulation much faster than a gem5.opt build, and how would that compare to gem5.fast?

In gem5 fe15312aae8007967812350f8cdac9ad766dcff7 (2019), the gem5.fast build already enables LTO by default, so you generally never want to use that option explicitly: you rather just want gem5.fast.
Other things to keep in mind about .fast:
it also removes -g, so you get no debug symbols. I wonder why, since that does not make runs any faster.
it also turns on NDEBUG, which has the standard effect of disabling assert() entirely, plus some gem5-specific effects spread throughout the code behind #ifndef NDEBUG checks
it disables TRACING_ON, which makes DPRINTF and family become empty statements, as seen in: src/base/trace.hh
Those effects can be seen easily at src/SConstruct.
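To make the NDEBUG point concrete, here is a minimal C++ sketch (not actual gem5 source; expensive_sanity_check is a made-up stand-in):

    #include <cassert>
    #include <cstdio>

    // Hypothetical stand-in for a gem5-internal consistency check.
    static void expensive_sanity_check(int x) { std::printf("checking %d\n", x); }

    void check(int x)
    {
        assert(x >= 0); // compiled away entirely when NDEBUG is defined (gem5.fast)

    #ifndef NDEBUG
        // gem5 guards some of its own checks like this; they also
        // vanish from gem5.fast builds.
        expensive_sanity_check(x);
    #endif
    }

    int main() { check(1); }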
That option exists because the more common gem5.opt build also uses partial linking, which in some versions of GCC was incompatible with LTO.
Therefore, as its name suggests, --force-lto forces the use of LTO together with partial linking, which might not be stable. That's why I recommend using gem5.fast rather than touching --force-lto.
The goal of partial linking is presumably to speed up the link step, which can easily be the bottleneck in a "change one file, rebuild, relink, test" loop, although in my experiments it is not clear that it actually achieves that. Today it might just be a relic from the past.
To try to speed up linking, I recommend trying scons --gold-linker instead, which uses the gold linker in place of ld. Note, however, that this option was more noticeably effective for gem5.debug.
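For example, assuming an X86 target (adjust the ISA to taste):

    scons --gold-linker build/X86/gem5.opt -j`nproc`
    scons build/X86/gem5.fast -j`nproc`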
I have found that gem5.fast is generally 20% faster than gem5.opt for Atomic CPUs.

Related

Running the LLVM opt hotcoldsplit pass

Hot cold splitting is an effective code optimization in LLVM.
This built-in LLVM pass is located at:
/llvm/lib/Transforms/IPO/HotColdSplitting.cpp
I want to use this pass to optimize my code, but I couldn't find any documentation on how to apply it.
I already know that I should use the LLVM opt command to load the pass, but I couldn't find the proper way to apply this optimization pass to my program.
I have two questions so far:
1) How do I use opt properly to load this pass and optimize my code?
2) Can I use this pass directly from clang, via a switch like -fsanitize=address that applies to the program being compiled?
Thanks.
You can pass the -mllvm -hot-cold-split=true flag to clang, which will enable the hot/cold splitting pass in the optimizer when compiling your file.
Yes, in principle you can use this pass directly (as of the time of answering); hot/cold splitting in LLVM, in its current form, only optimizes for code size. Alternatively, you might want to first collect profiling data via PGO and then feed that data into clang, so that it can take advantage of the profile information during the build (which might help hot/cold splitting in terms of performance).
Hot cold splitting can be used to optimize an app for startup performance, as well as for runtime performance in some cases. To enable hot cold splitting optimization you can pass the flag to llvm using -mllvm -hot-cold-split.
Hot cold splitting gives the best performance improvement in the presence of profile data, although it can also optimize applications without profile data using built-in static analysis. For example, catch blocks and non-returning functions are already known to be cold; hot cold splitting uses this information.
Currently there is no direct flag in the clang frontend to enable this, so you'll have to use -mllvm -hot-cold-split. For more details on hot cold splitting, the llvm-dev talk on YouTube is quite informative: https://www.youtube.com/watch?v=Q8rqGg6vHAE
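To make this concrete, here is a minimal sketch of the kind of code the pass targets, with hedged example invocations in the comments (the file name is made up; the split flag is the one quoted above, and the PGO flags are standard clang ones):

    #include <cstdio>
    #include <stdexcept>

    // Hot path: called constantly.
    int process(int x)
    {
        if (x < 0) {
            // Cold path: rarely taken and ends in a throw, which the pass's
            // static analysis already treats as cold. Splitting moves this
            // block out of the hot function body.
            std::fprintf(stderr, "bad input: %d\n", x);
            throw std::invalid_argument("negative input");
        }
        return x * 2;
    }

    int main()
    {
        long sum = 0;
        for (int i = 0; i < 1000000; ++i)
            sum += process(i);
        std::printf("%ld\n", sum);
    }

    // Possible invocations (untested sketch):
    //   clang++ -O2 -mllvm -hot-cold-split=true split.cpp
    // With PGO, which the answers above say helps the pass most:
    //   clang++ -O2 -fprofile-generate split.cpp -o split && ./split
    //   llvm-profdata merge -output=split.profdata default_*.profraw
    //   clang++ -O2 -fprofile-use=split.profdata -mllvm -hot-cold-split=true split.cpp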

Ways to make a D program faster

I'm working on a very demanding project (actually an interpreter), written exclusively in D, and I'm wondering what types of optimizations would generally be recommended. The project makes heavy use of the GC, classes, associative arrays, and pretty much everything.
Regarding compilation, I've already experimented both with DMD and LDC flags and LDC with -flto=full -O3 -Os -boundscheck=off seems to be making a difference.
However, as rudimentary as this may sound, I would like you to suggest anything that comes to mind that could help improve performance, whether or not it is specific to D. (I'm sure I'm missing several things.)
Compiler flags: I would add -mcpu=native if the program will be running on your machine. Not sure what effect -Os has in addition to -O3.
Profiling has been mentioned in the comments. Personally, under Linux I have a script which dumps a process's stack trace, and I do that a few times to get an idea of where it's getting hung up.
Since you mentioned classes: in D, methods are virtual by default; virtual methods add indirection and cannot be inlined. Make sure only those methods that must be virtual are. See if you can rewrite your program using a form of polymorphism that doesn't involve indirection, such as template metaprogramming; a sketch follows below.
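For concreteness, here is the same idea sketched in C++ (the D version would mark methods final or use templates in the same way):

    #include <cstdio>

    struct Base {
        // Virtual: every call goes through the vtable and resists inlining.
        virtual int eval(int x) const { return x; }
    };

    struct Doubler {
        // Non-virtual: no vtable, direct call.
        int eval(int x) const { return 2 * x; }
    };

    // Template polymorphism: the concrete type is known at compile time,
    // so the call can be inlined.
    template <typename T>
    int run(const T& op, int x) { return op.eval(x); }

    int main()
    {
        Doubler d;
        std::printf("%d\n", run(d, 21)); // direct, inlineable call
    }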
Since you mentioned associative arrays: these make heavy use of the GC; to speed them up, switch to a third-party library that works on top of std.allocator, such as https://github.com/dlang-community/containers
If some parts of your code are parallelizable, std.parallelism is a good tool for this.
Since you mentioned that the project is an interpreter: there are many avenues for optimizing interpreters, up to JIT/AOT compilation. Perhaps you could link against an existing library such as LLVM or libjit.

Testing the performance of two pieces of code: which flags should I use (with gcc)? -O0, -O2, or -g?

When I write a routine to test the performance of two pieces of code, which optimization flags should I use: -O0, -O2, or -g?
You should test the performance of your code using each of the settings. Ideally, a larger number (-O0, -O1, -O2, -O3) implies better performance, as more and better optimization is applied, but that is not always the case.
Likewise, depending on how your code is written, some of it may be removed in a way that you didn't expect, by the language, the compiler, or both (see the sketch at the end of this answer). So not only do you need to test the performance of your code, you need to actually test the generated program to see that it does what you think it does.
There is definitely not one optimization setting that provides the best performance for all code the compiler can compile. You have to test the settings and compiler on a particular system to verify that, for that system, the code does indeed run faster. Measuring performance is full of traps that can make you misread the results, so you have to be careful about how you test it.
For gcc, folks usually say -O3 is risky to use and -O2 is the best performance/safety trade-off, and for the most part that is the case: -O2 is used enough that many bugs have been flushed out. -O2 does not always produce the fastest code, but it generally produces faster code than -O0 and -O1. Use of a debugger can defeat optimization or remove it altogether, so never measure performance with a debugger-based build or under a debugger. Test the system the way the user would use it: if the user runs your program under a debugger, then test that way; otherwise don't.
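Here is a minimal sketch of that benchmarking trap with GCC/Clang:

    #include <cstdio>
    #include <ctime>

    int main()
    {
        std::clock_t t0 = std::clock();

        long sum = 0;
        for (long i = 0; i < 100000000; ++i)
            sum += i;

        std::clock_t t1 = std::clock();

        // If `sum` were never used, -O2 could legally delete the whole loop
        // and the "benchmark" would measure nothing. Even with this use the
        // compiler may constant-fold the loop, which is why real benchmarks
        // feed in opaque inputs or use barriers to keep the work alive.
        std::printf("sum=%ld time=%f\n", sum,
                    (double)(t1 - t0) / CLOCKS_PER_SEC);
    }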
In GCC, -O0 disables compiler optimizations entirely. -g adds debugging info to the executable so that you can use a debugger.
If you want to enable speed optimizations, use -O1 or -O2. See man gcc(1) for more information.
If you want to measure the performance of your code, use a profiler such as valgrind or gprof.
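For example, with gprof on a Linux host (a sketch; myprog.c is a placeholder):

    gcc -O2 -pg myprog.c -o myprog
    ./myprog                 # writes gmon.out
    gprof ./myprog gmon.out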
Actually, if you care about performance you should definitely use -O3. Why give away potential optimisations?
And yes, there’s a small but measurable difference between -O2 and -O3.
-g is not an optimisation flag, and with GCC it does not change the generated code (GCC explicitly supports combining -g with -O), so leaving it on does not invalidate benchmarks; it only makes the binary larger.

Why is my code performing poorly when built with Realview tools but better with Codesourcery?

I have a C project which was previously built with Codesourcery's GNU toolchain. Recently it was converted to use Realview's armcc compiler, but the performance we are getting with the Realview tools is very poor compared to when it is compiled with the GNU tools. Shouldn't it be the opposite, i.e. shouldn't it perform better when compiled with Realview's tools? What am I missing here? How can I improve the performance with the Realview tools?
Also I have noticed that if I run the binary produced by Realview Tools with Lauterbach it crashes but If I run it using Realview ICE it runs fine.
UPDATE 1
Realview Command line:
armcc -c --diag_style=ide --depend_format=unix_escaped --no_depend_system_headers --no_unaligned_access --c99 --arm_only --debug --gnu --cpu=ARM1136J-S --fpu=SoftVFP --apcs=/nointerwork -O3 -Otime
GNU GCC command line:
arm-none-eabi-gcc -mcpu=arm1136jf-s -mlittle-endian -msoft-float -O3 -Wall
I am using Realview Tools version 4.1 and GCC version 4.4.1
UPDATE 2
The Lauterbach issue has been solved. It was caused by semihosting: the semihosting SWI was not being handled in the Lauterbach environment. Retargeting the C library to avoid semihosting did the trick, and now my program runs successfully with Lauterbach as well as Realview ICE. But the performance issue remains.
Since you have optimisations on, and in some environments it crashes, it may be that your code relies on undefined behaviour or has some other latent error. Such behaviour can change with optimisation, or break altogether.
I suggest that you try both tool-chains without optimisation, make sure that the warning level is set high, and fix all warnings. GCC is far better than armcc at error checking, so it is a reasonable static analysis check. If the code builds clean it is more likely to work, and may be easier for the optimiser to handle.
Have you tried removing the '--no_unaligned_access'? ARM11s can typically do unaligned access (if enabled in the startup code) and forcing the compiler/library to not do them may be slowing down your code.
The current version of RVCT says of '--fpu=SoftVFP':
"In previous releases of RVCT, if you specified --fpu=softvfp and a CPU with implicit VFP hardware, the linker chose a library that implemented the software floating-point calls using VFP instructions. This is no longer the case. If you require this legacy behavior, use --fpu=softvfp+vfp."
This suggests to me that if you perhaps have an old version of RVCT the behaviour will be to use software floating point regardless of the presence of hardware floating point. While in the GNU version -msoft-float will use hardware floating point instructions when an FPU is available.
So what version of RVCT are you using?
Either way, I suggest that you remove the --fpu option, since the compiler will make an appropriate implicit selection based on the --cpu option. You also need to correct the CPU selection: your RVCT option says --cpu=ARM1136J-S, not ARM1136JF-S as you told GCC. This will no doubt prevent the compiler from generating VFP instructions, since you told it there is no VFP.
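Concretely, that suggestion (plus dropping --no_unaligned_access, as mentioned above) amounts to something like the following untested sketch, with the remaining flags copied from the question:

    armcc -c --diag_style=ide --depend_format=unix_escaped --no_depend_system_headers --c99 --arm_only --debug --gnu --cpu=ARM1136JF-S --apcs=/nointerwork -O3 -Otime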
The same source code can produce dramatically different binaries due to factors like: different compilers (llvm vs gcc, gcc 4 vs gcc 3, etc.); different versions of the same compiler; different compiler options for the same compiler; optimization settings; and whether you compile for release or debug (whatever terms you want to use, the binaries are quite different). When going embedded, you add the complication of a bootloader or ROM monitor (debugger) and things like that, and then on top of that the host-side tools that talk to the ROM monitor or compiled-in debugger. Despite being a far better compiler than gcc, ARM's compilers were infected with the assumption that the binaries would always run on top of their ROM monitor. My recollection is that by the time RVCT became their primary compiler that assumption was on its way out, but I have not really used their tools since then.
The bottom line is that there are a handful of major factors that affect the differences between binaries, and they can and will lead to a different experience. Assuming that you will get the same performance or results is a bad assumption; the expectation is that the results will differ. Likewise, within the same environment, you can create binaries that give dramatically different performance results, all from the same source code.
Do you have compiler optimizations turned on in your CodeSourcery build, but not in the Realview build?

Any Macro or Technique for Partial Optimization?

I am working on a lock-free structure with the g++ compiler. It seems that with the -O1 switch, g++ will change the execution order of my code. How can I forbid g++'s optimization of certain parts of my code while maintaining optimization elsewhere? I know I can split it into two files and link them, but that looks ugly.
If you find that gcc changes the order of execution in your code, you should consider using a memory barrier (see the sketch below). Just don't assume that volatile variables will protect you from that issue. They only ensure that, within a single thread, the behavior is what the language guarantees, and that variables are always read from their memory location, to account for changes "invisible" to the executing code (e.g. changes to a variable made by a signal handler).
GCC has supported OpenMP since version 4.2. You can use it to create a memory barrier with a special #pragma directive.
A very good insight into lock-free code is this PDF by Herb Sutter and Andrei Alexandrescu: C++ and the Perils of Double-Checked Locking
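For illustration, here is a sketch of the non-OpenMP barrier options available in GCC of that era (__sync_synchronize is a GCC builtin; the asm statement is a compiler-only barrier):

    // Compiler barrier: prevents GCC from reordering memory accesses across
    // this point, but emits no CPU fence instruction.
    #define COMPILER_BARRIER() __asm__ __volatile__("" ::: "memory")

    void publish(int* data, int* ready)
    {
        *data = 42;
        __sync_synchronize(); // GCC builtin: full compiler + hardware memory
                              // barrier; the two stores cannot swap across it
        *ready = 1;
    }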
You can use the function attribute __attribute__((optimize(0))) to set the optimization level for a single function, or #pragma GCC optimize for a block of code. These require GCC 4.4, though, I think; check your GCC manual. If they aren't supported, separating the source is your only option.
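A minimal sketch of both mechanisms (GCC 4.4+; push_options/pop_options are the GCC pragmas for scoping the setting, not mentioned in the answer above):

    // Per-function: compile just this function at -O0.
    __attribute__((optimize(0)))
    void delicate_part(volatile int* flag)
    {
        while (*flag == 0) { } // left exactly as written
    }

    // Per-region: functions defined between push_options and pop_options are
    // compiled at -O0; the previous settings are restored after pop_options.
    #pragma GCC push_options
    #pragma GCC optimize ("O0")
    void another_delicate_part(void) { /* unoptimized code here */ }
    #pragma GCC pop_options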
I would also say, though, that if your code fails with optimization turned on, it is most likely that your code is just wrong, especially as you're trying to do something that is fundamentally very difficult. The processor itself can reorder your code (within the limits of its memory model), so any reordering that GCC introduces could potentially occur anyway.