GCC -mthumb against -marm - optimization

I am working on performance optimizations of ARM C/C++ code, compiled with GCC. CPU is Tegra 3.
As I understand it, the -mthumb flag means generating the older 16-bit Thumb instructions. In various tests, I see a 10-15% performance increase with -marm compared to -mthumb.
Is -mthumb used only for compatibility, while -marm is generally better for performance?
I am asking because android-cmake uses -mthumb in Release mode and -marm in Debug mode, which is very confusing to me.

Thumb is not the older instruction set but in fact the newer one. The current revision is Thumb-2, a mixed 16/32-bit instruction set. The original Thumb-1 instruction set was a compressed version of the ARM instruction set: the CPU would fetch the instruction, decompress it into ARM and then process it. These days (ARMv7 and above), Thumb-2 is preferred for everything but performance-critical or system code. For example, GCC will by default generate Thumb-2 for ARMv7 (like your Tegra 3), as the higher code density provided by the 16/32-bit ISA allows for better icache utilization. But this is something which is very hard to measure in a normal benchmark, because most benchmarks will fit into the L1 icache anyway.
For more information check the Wikipedia site: http://en.wikipedia.org/wiki/ARM_architecture#Thumb
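If you want to experiment with the difference yourself, here is a minimal sketch of mixing the two instruction sets in one file with GCC (this assumes a GCC version that supports per-function target attributes on ARM; the function names are made up for illustration):

    /* Build the whole file as Thumb-2, e.g.: gcc -O2 -march=armv7-a -mthumb hot.c -c */

    /* Force ARM encoding for one hot routine, to compare against Thumb-2. */
    __attribute__((target("arm")))
    int dot3_arm(const int *a, const int *b)
    {
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
    }

    /* Everything else keeps the file-level -mthumb setting. */
    int sum3(const int *a)
    {
        return a[0] + a[1] + a[2];
    }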

ARM instructions are 32 bits wide, so a single instruction has more bits to do more work, while Thumb, with only 16 bits, might have to split the same functionality across two instructions. Based on the assumption that non-memory instructions take more or less the same time, fewer instructions mean faster code. There were also some things that just couldn't be done in Thumb code.
The idea was then that ARM would be used for performance critical functionality while THUMB (which fits 2 instructions into a 32 bit word) would be used to minimize storage space of programs.
As CPU memory caching became more critical, having more instructions in the icache was a bigger determinant of speed than functional density per instruction. This meant that Thumb code became faster than the equivalent ARM code. ARM (the company) therefore created Thumb-2, a variable-length 16/32-bit instruction set that incorporates most ARM functionality. Thumb-2 should in most cases give you denser as well as faster code, thanks to better caching.

Related

Table of optimization levels of the GNU C++ compiler g++, accurate?

Although I know each and every program is a different scenario, I have a rather specific question regarding the table below.
Optimization levels of the GNU C++ compiler g++
Ox      WHAT IS BEING OPTIMIZED                        | EXEC TIME | CODE SIZE | MEM | COMP TIME
-------------------------------------------------------------------------------------------------
O0      optimize for compilation time                  |     +     |     +     |  -  |     -
O1      optimize for code size and execution time #1   |     -     |     -     |  +  |     +
O2      optimize for code size and execution time #2   |     --    |     0     |  +  |     ++
O3      optimize for code size and execution time #3   |     ---   |     0     |  +  |     +++
Ofast   O3 with fast, non-accurate math calculations   |     ---   |     0     |  +  |     +++
Os      optimize for code size                         |     0     |     --    |  0  |     ++

+ increase    ++ increase more    +++ increase even more    - reduce    -- reduce more    --- reduce even more
I am using version 8.2, though this should be a generic table; it was taken from another site and re-written in plain text.
My question is whether it can be trusted. I don't know that web site, so I'd better ask the professionals here. So, is the table more or less accurate?
Your table is broadly accurate.
Notice that GCC has zillions of optimization options. Some weird optimization passes are not even enabled at -O3 (GCC has several hundred optimization passes in total).
But there is no guarantee that -O3 always gives code which runs faster than the same code compiled with -O2. This is generally the case, but not always. You can find pathological (or just weird) C source code which, when compiled with -O3, gives slightly slower binary code than the same source compiled with -O2. For example, -O3 is likely to unroll loops "better" (at least "more") than -O2, but some code might perform worse if a particular loop in it is unrolled more. The Phoronix website and others benchmark GCC and observe such phenomena.
Be aware that optimization is an art; it is in general an intractable or undecidable problem, and current processors are so complex that there is no exact and complete model of their performance (think of caches, branch predictors, pipelines, out-of-order execution). Besides, the detailed micro-architecture of x86 processors is obviously not public (you cannot get the VHDL or chip layout of Intel or AMD chips). Hence, the -march= option to GCC also matters (the same binary code is not always good on both AMD and Intel chips, or even on several brands of Intel processors). So, if you compile code on the same machine that runs it, passing -march=native in addition to -O2 or -O3 is recommended.
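As an illustration only (the compile commands in the comment are typical invocations, not a prescription), a tight loop like the one below is the kind of code where -O2, -O3 and -march=native can produce measurably different binaries: -O3 may vectorize and unroll it more aggressively, which may or may not pay off for your data sizes.

    /* saxpy.c -- compare, for example:
     *   gcc -O2 saxpy.c -c
     *   gcc -O3 saxpy.c -c
     *   gcc -O3 -march=native saxpy.c -c
     */
    #include <stddef.h>

    void saxpy(float *restrict y, const float *restrict x, float a, size_t n)
    {
        for (size_t i = 0; i < n; ++i)
            y[i] = a * x[i] + y[i];
    }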
People paid by Intel and by AMD are actively contributing to GCC, but they are not allowed to share all the knowledge they have internally about Intel or AMD chips. They are allowed to share (with the GPLv3+ license of GCC) the source code they are contributing to the GCC compiler. Probably engineers from AMD are observing the Intel-contributed GCC code to guess micro-architectural details of Intel chips, and vice versa.
And Intel's and AMD's interests obviously include making GCC work well with their proprietary chips. Those corporate interests justify paying (both at Intel and at AMD) several highly qualified compiler engineers to contribute full time to GCC.
In practice, I have observed that both AMD and Intel engineers are "playing the game" of open source: they routinely contribute GCC code which also improves their competitor's performance. This is more a social, ethical and economic issue than a technical one.
PS. You can find many papers and books on the economics of open source.

Are modern GPUs considered to be RISC based or CISC based?

I'm trying to figure out if modern GPUs have a reduced instruction set, or a complex instruction set.
Wikipedia says that it's not the size of the instruction set, rather how many cycles it takes to complete an instruction.
In RISC processors, each instruction can be completed in one cycle.
In CISC processors, it takes several cycles to complete some instructions.
I'm trying to figure out what the case is for modern GPUs.
If you mean Nvidia, then it's clearly RISC, as most of its GPUs don't even have integer division and modulo operations in hardware; only shifts, bitwise operations and three arithmetic operations (addition, subtraction, multiplication) are used to implement those two. I can't find an example, but this question (modular arithmetic on the GPU) shows that mod uses a
procedure which implements some sophisticated algorithm (about 50 instructions or even more)
Even the NVVM (Nvidia virtual machine) language, PTX, uses more operations, some of which are "baked" into a bunch of simpler operations anyway after conversion to one of the native languages (there are different versions of these because of the nature of GPUs and their generations/families, but altogether they are just called SASS).
You can see here all the available operations, along with a description of each, which are nevertheless very short and not very clear (especially if you don't have a background in machine-level programming, like knowing that "scaled" refers to a left shift of an operand, just as in x86's "FSCALE" or "scale factor", etc.):
https://docs.nvidia.com/cuda/cuda-binary-utilities/index.html#instruction-set-ref
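As a rough illustration of what "implementing mod without a hardware divider" looks like in the easy case (this is my own sketch, not code from the linked question): when the divisor is a power of two, the modulo collapses to a single bitwise AND, which is exactly the kind of shift/bitwise building block such routines are made of; the general case needs the much longer procedure mentioned above.

    #include <assert.h>
    #include <stdint.h>

    /* x % d without a divide instruction; valid only when d is a power of two. */
    static uint32_t mod_pow2(uint32_t x, uint32_t d)
    {
        assert(d != 0 && (d & (d - 1)) == 0);   /* d must be a power of two */
        return x & (d - 1);
    }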
If you mean AMD GPUs, then there are a lot of instructions and it's not so clear, because some sources say they switched from VLIW to something else just when the Southern Islands GPUs were released.
RISC instruction set: the load/store unit is independent from the other units, so specific instructions are used for loading and storing.
CISC instruction set: the load/store stage is embedded in the instruction's execution routine, therefore the instruction is more complex than a RISC instruction, because besides the operation itself it also performs the load and store stages, and this requires more transistor logic per instruction.
The goal of CISC was to take common coding patterns and accelerate them in hardware. You see this in the constant extensions to the base architecture. See Intel's MMX and SSE, and AMD's 3DNow!, etc. https://en.wikipedia.org/wiki/Streaming_SIMD_Extensions This also makes for good marketing, as you need to upgrade to the new processor to accelerate the newest common tasks, and keeps coders busy constantly translating their code patterns to the new extensions.
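For instance, here is a minimal sketch of using one of those extensions from C via intrinsics (SSE; this assumes an SSE-capable x86 target and a compiler flag such as -msse):

    #include <xmmintrin.h>

    /* Add four pairs of floats with a single SSE instruction (ADDPS). */
    void add4(const float *a, const float *b, float *out)
    {
        __m128 va = _mm_loadu_ps(a);
        __m128 vb = _mm_loadu_ps(b);
        _mm_storeu_ps(out, _mm_add_ps(va, vb));
    }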
The goal of RISC was the opposite. It tried to perform few base functions as fast as possible. The coder then needs to continue to break down their common coding tasks to those simple instructions (although high-level programming languages and code packages/libraries accomplish this for you). RISC continues to survive as the architecture for ARM processors. See: https://en.wikipedia.org/wiki/Reduced_instruction_set_computer
I note that GPUs are similar to the RISC philosophy, in that the goal is to perform as many relatively simple computations as fast as possible. The move toward deep learning created a need for training millions of relatively simple parameters, hence the move back toward a highly parallel, relatively simple architecture. Having both philosophies implemented inside your computer is the best of both worlds.

Optimal way to move memory in x86 and ARM?

I am interested in knowing the best approach for bulk memory copies on an x86 architecture. I realize this depends on machine-specific characteristics. The main target is typical desktop machines made in the last 4-5 years.
I know that in the old days REP MOVSD was nominally the fastest approach because you could move 4 bytes at a time, but I have read that nowadays REP MOVSB is just as fast and is simpler to write, so you may as well do a byte move and just forget about the complexities of a 4-byte move.
A surrounding question is whether MOVxx instructions are worth it at all. If the CPU can run so much faster than the memory bus, then maybe it is pointless to use a CISC-style move and you may as well use plain MOVs. This would be most attractive because then I could use the same algorithms on other processor architectures like ARM. This brings up the analogous question of whether ARM's specialized instructions for bulk memory moves (which are totally different from Intel's) are worth it or not.
Note: I have read section 3.7.6 in the Intel Optimization Reference Manual so I am familiar with the basics. I am hoping someone can relate practical experience in the area beyond what is in this manual.
Modern Intel and AMD processors have optimisations on REP MOVSB that make it copy entire cache lines at a time if it can, making it the best (may not be fastest, but pretty close) method of copying bulk data.
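If you want to see what that looks like in practice, here is a minimal sketch using GCC/Clang inline assembly on x86-64 (in real code a plain memcpy() call is typically tuned at least as well, so treat this as illustration only):

    #include <stddef.h>

    /* Copy n bytes with REP MOVSB; RDI = destination, RSI = source, RCX = count. */
    static void copy_rep_movsb(void *dst, const void *src, size_t n)
    {
        __asm__ volatile ("rep movsb"
                          : "+D" (dst), "+S" (src), "+c" (n)
                          :
                          : "memory");
    }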
As for ARM, it depends on the architecture version, but in general using an unrolled loop would be the most efficient.
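A sketch of the unrolled-loop idea in plain C (word-sized copies, four per iteration; alignment handling is deliberately left out):

    #include <stddef.h>
    #include <stdint.h>

    void copy_unrolled(uint32_t *dst, const uint32_t *src, size_t nwords)
    {
        size_t i = 0;
        for (; i + 4 <= nwords; i += 4) {   /* four words per iteration */
            dst[i]     = src[i];
            dst[i + 1] = src[i + 1];
            dst[i + 2] = src[i + 2];
            dst[i + 3] = src[i + 3];
        }
        for (; i < nwords; ++i)             /* copy the remaining tail words */
            dst[i] = src[i];
    }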

Optimisation, Compilers and Their Effects

(i) If a program is optimised for one CPU class (e.g. multi-core Core i7) by compiling the code on that CPU, will its performance be sub-optimal on CPUs from older generations (e.g. Pentium 4)? Could optimizing prove harmful to performance on other CPUs?
(ii) For optimization, compilers may use x86 extensions (like SSE 4) which are not available in older CPUs. Is there a fall-back to some non-extension-based routine on older CPUs?
(iii) Is the Intel C++ compiler more optimizing than the Visual C++ compiler or GCC?
(iv) Will a truly multi-core, threaded application perform efficiently on older CPUs (like Pentium III or 4)?
Compiling on a platform does not mean optimizing for this platform. (maybe it's just bad wording in your question.)
In all compilers I've used, optimizing for platform X does not affect the instruction set, only how it is used, e.g. optimizing for i7 does not enable SSE2 instructions.
Also, optimizers in most cases avoid "pessimizing" non-optimized platforms, e.g. when optimizing for i7, typically a small improvement on i7 will not be chosen if it means a major hit for another common platform.
It also depends on the performance differences between the instruction sets - my impression is that they have become much smaller in the last decade (but I haven't delved too deep lately - I might be wrong about the latest generations). Also consider that optimizations make a notable difference only in a few places.
To illustrate possible options for an optimizer, consider the following methods to implement a switch statement:
a sequence of if (x==c) goto label tests
range check and jump table
binary search
combination of the above
The "best" algorithm depends on the relative cost of comparisons, jumps by fixed offsets and jumps to an address read from memory. They don't differ much on modern platforms, but even small differences can create a preference for one or the other implementation (see the sketch below).
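For concreteness, here is a small, purely illustrative example of the kind of switch a compiler may lower either to a jump table or to a compare-and-branch tree, depending on those relative costs:

    /* Dense, small case values favour a jump table; sparse values tend to
     * produce a tree of compares and branches instead. */
    int classify(int x)
    {
        switch (x) {
        case 0:  return 10;
        case 1:  return 20;
        case 2:  return 30;
        case 3:  return 40;
        default: return -1;
        }
    }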
It is probably true that optimising code for execution on CPU X will make that code less optimal on CPU Y than the same code optimised for execution on CPU Y. Probably.
Probably not.
Impossible to generalise. You have to test your code and come to your own conclusions.
Probably not.
For every argument about why X should be faster than Y under some set of conditions (choice of compiler, choice of CPU, choice of optimisation flags for compilation) some clever SOer will find a counter-argument, for every example a counter-example. When the rubber meets the road the only recourse you have is to test and measure. If you want to know whether compiler X is 'better' than compiler Y first define what you mean by better, then run a lot of experiments, then analyse the results.
I) If you did not tell the compiler which CPU type to favor, the odds are that it will be slightly sub-optimal on all CPUs. On the other hand, if you let the compiler know to optimize for your specific type of CPU, then it can definitely be sub-optimal on other CPU types.
II) No (for Intel and MS at least). If you tell the compiler to compile with SSE4, it will feel safe using SSE4 anywhere in the code without testing. It becomes your responsibility to ensure that your platform is capable of executing SSE4 instructions, otherwise your program will crash. You might want to compile two libraries and load the proper one. An alternative to compiling for SSE4 (or any other instruction set) is to use intrinsics; these will check internally for the best performing set of instructions (at the cost of a slight overhead). Note that I am not talking about instruction intrinsics here (those are specific to an instruction set), but intrinsic functions.
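A hedged sketch of the "compile two versions and pick the right one at run time" idea, using GCC/Clang's __builtin_cpu_supports (the two sum_* routines are hypothetical; you would build the SSE4 one in a file compiled with -msse4.1 and the fallback without it):

    #include <stddef.h>

    void sum_sse41(const float *a, size_t n, float *out);   /* built with -msse4.1 */
    void sum_scalar(const float *a, size_t n, float *out);  /* portable fallback   */

    void sum_dispatch(const float *a, size_t n, float *out)
    {
        if (__builtin_cpu_supports("sse4.1"))
            sum_sse41(a, n, out);
        else
            sum_scalar(a, n, out);
    }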
III) That is a whole other discussion in itself. It changes with every version, and may be different for different programs. So the only solution here is to test. Just a note though: Intel compilers are known not to compile well for running on anything other than Intel (e.g. intrinsic functions may not recognize the instruction set of an AMD or VIA CPU).
IV) If we ignore the on-die efficiencies of newer CPUs and the obvious architecture differences, then yes, it may perform as well on an older CPU. Multi-core processing is not dependent per se on the CPU type. But the performance is VERY dependent on the machine architecture (e.g. memory bandwidth, NUMA, chip-to-chip bus) and on differences in the multi-core communication (e.g. cache coherency, bus locking mechanism, shared cache). All this makes it impossible to compare newer and older CPU efficiencies in MP, but that is not what you are asking, I believe. So on the whole, an MP program made for newer CPUs should not use the MP aspects of older CPUs any less efficiently. Or in other words, just tweaking the MP aspects of a program specifically for an older CPU will not do much. Obviously you could rewrite your algorithm to use a specific CPU more efficiently (e.g. a shared cache may permit you to use an algorithm that exchanges more data between working threads, but this algorithm will die on a system with no shared cache, full bus locking and poor memory latency/bandwidth), but it involves a lot more than just MP-related tweaks.
(1) Not only is it possible but it has been documented on pretty much every generation of x86 processor. Go back to the 8088 and work your way forward, every generation. Clock for clock the newer processor was slower for the current mainstream applications and operating systems (including Linux). The 32 to 64 bit transition is not helping, more cores and less clock speed is making it even worse. And this is true going backward as well for the same reason.
(2) Bank on your binaries failing or crashing. Sometimes you get lucky; most of the time you don't. There are new instructions, yes, and supporting them would probably mean trapping the undefined instruction and having a software emulation of it, which would be horribly slow, and the lack of demand for it means it is probably not well done or just not there. Optimization can use new instructions, but more than that, the bulk of the optimization that I am guessing you are talking about has to do with reordering the instructions so that the various pipelines do not stall. So if you arrange them to be fast on one generation of processor, they will be slower on another, because in the x86 family the cores change too much. AMD had a good run there for a while, as they would make the same code just run faster instead of trying to invent new processors that would eventually be faster once the software caught up. That is no longer true; both AMD and Intel are struggling just to keep chips running without crashing.
(3) Generally, yes. For example gcc is a horrible compiler; one size fits all fits no one well, and it can never and will never be any good at optimizing. For example, gcc 4.x code is slower than gcc 3.x code for the same processor (yes, all of this is subjective; it all depends on the specific application being compiled). The in-house compilers I have used were leaps and bounds ahead of the cheap or free ones (I am not limiting myself to x86 here). Are they worth the price though? That is the question.
In general, because of the horrible new programming languages and gobs of memory, storage and layers of caching, software engineering skills are at an all-time low. Which means the pool of engineers capable of making a good compiler, much less a good optimizing compiler, decreases with time; this has been going on for at least 10 years. So even the in-house compilers are degrading with time, or the companies just have their employees work on and contribute to the open source tools instead of having an in-house tool. Also the tools the hardware engineers use are degrading for the same reason, so we now have processors that we hope will just run without crashing, rather than ones we try hard to optimize for. There are so many bugs and chip variations that most of the compiler work is about avoiding the bugs. Bottom line, gcc has single-handedly destroyed the compiler world.
(4) See (2) above. Don't bank on it. The operating system that you want to run this on will likely not install on the older processor anyway, saving you the pain. For the same reason that binaries optimized for your Pentium III ran slower on your Pentium 4 and vice versa, code written to work well on multi-core processors will run slower on single-core processors than if you had optimized the same application for a single-core processor.
The root of the problem is that the x86 instruction set is dreadful. So many far superior instruction sets have come along that do not require hardware tricks to make them faster every generation. But the wintel machine created two monopolies and the others couldn't penetrate the market. My friends keep reminding me that these x86 machines are microcoded such that you really don't see the instruction set inside. Which angers me even more: the horrible ISA is just an interpretation layer. It is kind of like using Java. The problems you have outlined in your questions will continue as long as Intel stays on top; if the replacement does not become the monopoly, then we will be stuck forever in the Java model, where you are on one side or the other of a common denominator: either you emulate the common platform on your specific hardware, or you write apps and compile to the common platform.

GCC vs Greenhills on ARM

I'm interested in any comparisons between GCC and Greenhills C compiler with regard to memory footprint of generated code specifically on ARM platforms.
Are there any benchmarks or comparisons for these compilers? Has anyone had any experience here that they'd like to share?
You should note that the Green Hills EULA explicitly prohibits licensees from publishing benchmarks.
What you can do is obtain an evaluation licence from Green Hills and perform your own benchmarking. That would be more trustworthy and representative in any case since you could test it on real production code. And in any case the benchmark for say an ARM7 may be very different to that of a Cortex-M3 for example, so any available published results may not be comparing like-for-like, and may not be representative of your platform.
Beware also that I have experienced widely varying results from different binary distributions of GCC, even when ostensibly from the same code base version (specifically with software floating-point performance). So you are still probably best off trusting your own evaluation results only.
You might consider Keil and IAR at the same time which also have evaluation versions. Why are you considering just these two? People generally go with Green Hills when they have big budgets and can benefit from the RTOS integration and debugger capabilities available from a single source; any benefit you might get from using the compiler alone is unlikely to justify the license costs IMO.
I have not seen any benchmarks, but in my experience the two compilers are very similar in code size and in the code they generate.
Green Hills has lots of documentation and support if you want to reduce your memory footprint; with GCC it gets lonely very fast once you're off the beaten track. Green Hills also supports compressed executable images, which is great if you have limited flash but plenty of RAM.
I have also used custom runtime and C libraries (this can save you some more space) with both compilers, but you will need to do some digging to get the information for GCC, whereas with Green Hills you can get some of it via a wizard that generates the build file.