In Eigen, it appears that the assertion checking matrix sizes for matrix multiplication is disabled by default when building in CMake's release mode. Is there any way to keep this assertion in release mode?
I have a matrix product m1*m2 in my code where m1.cols() != m2.rows(). Both matrices are of dynamic size. When I set(CMAKE_BUILD_TYPE Debug), Eigen checks the sizes of the matrices and aborts with an assertion failure. When I set(CMAKE_BUILD_TYPE Release), the size check is disabled, the evaluation of m1*m2 goes through, out-of-bounds memory is accessed, and the result is essentially random.
To make use of assert in release mode, you must undefine the NDEBUG macro. Take a look at the suggestions in this question.
However, I would not recommend this. It would enable assert calls in release mode all over the code you are building. It kind of breaks the contract for the NDEBUG macro. And if you are unlucky it may also affect performance.
If that specific assertion/test is important for you, add it yourself at the application level of your code using something other than assert.
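For instance, a minimal sketch of such an application-level check (the helper name checkedProduct is mine, not part of Eigen):
#include <Eigen/Core>
#include <stdexcept>

// The explicit check survives release builds, unlike assert().
Eigen::MatrixXd checkedProduct(const Eigen::MatrixXd& m1,
                               const Eigen::MatrixXd& m2)
{
    if (m1.cols() != m2.rows())
        throw std::invalid_argument("matrix product: inner dimensions differ");
    return m1 * m2;
}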
You can predefine eigen_assert before including any Eigen headers, e.g., making it throw (or cerr and terminate).
#include <stdexcept> // std::runtime_error, used by the custom eigen_assert
#include <cassert>   // plain assert(), compiled out under NDEBUG
#define eigen_assert(cond) if (!(cond)) { throw std::runtime_error(#cond); }
#include <Eigen/Core> // must come after the eigen_assert definition
int main()
{
    assert(false && "not triggered in release mode");
    Eigen::Matrix3d A;
    Eigen::MatrixXd B(3, 2);
    A += B; // size mismatch: the redefined eigen_assert now throws
}
https://godbolt.org/z/Rqz7BB
Related
I have a simple hello-world Objective-C lib:
hello.m:
#import <Foundation/Foundation.h>
#import "hello.h"
void sayHello()
{
#ifdef FRENCH
NSString *helloWorld = @"Bonjour Monde!\n";
#else
NSString *helloWorld = @"Hello World!\n";
#endif
NSFileHandle *stdout = [NSFileHandle fileHandleWithStandardOutput];
NSData *strData = [helloWorld dataUsingEncoding: NSASCIIStringEncoding];
[stdout writeData: strData];
}
the hello.h file looks like this:
int main (int argc, const char * argv[]);
int sum(int a, int b);
void sayHello();
This compiles just fine on OS X and Linux using clang and gcc.
Now my question:
When running a clean compile of hello.m multiple times with clang on Ubuntu, the generated hello.o can differ. This does not seem to be related to a timestamp, because even after a second or more the generated .o file can have the same checksum. From my naive point of view, this seems like completely random/unpredictable behaviour.
I ran the compilation with -S to inspect the generated assembler code. The assembler code also differs (as expected). The diff comparing the assembler outputs can be found here: http://pastebin.com/uY1LERGX
From a first look, it just seems that the ordering is different in the assembler code.
This does not happen when compiling it with gcc.
Is there a way to tell clang to generate exactly the same .o file like gcc does?
clang --version:
Ubuntu clang version 3.0-6ubuntu3 (tags/RELEASE_30/final) (based on LLVM 3.0)
The property that a compiler always produces the same output for the same input is called Reproducible Builds or deterministic compilation.
One possible source of a compiler's output instability is ASLR (address space layout randomization). Sometimes the compiler, or some library it uses, reads object addresses and uses them, for example as keys of hashes or maps, or when sorting objects according to their addresses. When the compiler iterates over such a hash, it reads objects in an order that depends on their addresses, and ASLR will place objects at different addresses on every run. The effect of this can look exactly like your reordered symbols (the .quads in your diff).
You can disable Linux ASLR globally with echo 0 | sudo tee /proc/sys/kernel/randomize_va_space. A local way of disabling ASLR in Linux is
setarch `uname -m` -R /bin/bash
man page of setarch says: -R, "--addr-no-randomize" Disables randomization of the virtual address space (turns on ADDR_NO_RANDOMIZE).
For OS X 10.6 there is the DYLD_NO_PIE environment variable (check man dyld; possible usage in bash: export DYLD_NO_PIE=1). In 10.7 and newer there is a --no_pie build flag to be used when building LLVM itself; or you can set _POSIX_SPAWN_DISABLE_ASLR in posix_spawnattr_setflags before starting llvm; or, on 10.7+, you can use the script http://src.chromium.org/viewvc/chrome/trunk/src/build/mac/change_mach_o_flags.py with the --no-pie option to clear the PIE flag from the llvm binaries (thanks to the asan people).
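A minimal sketch of the posix_spawn route (note: _POSIX_SPAWN_DISABLE_ASLR is a private Apple flag, so its availability and the fallback value below are assumptions, not documented API):
#include <spawn.h>
#include <sys/types.h>

#ifndef _POSIX_SPAWN_DISABLE_ASLR
#define _POSIX_SPAWN_DISABLE_ASLR 0x0100 /* assumed value from Apple headers */
#endif

extern char **environ;

/* Spawn argv[0] with ASLR disabled for the child process. */
int spawn_without_aslr(char *const argv[])
{
    pid_t pid;
    posix_spawnattr_t attr;
    posix_spawnattr_init(&attr);
    posix_spawnattr_setflags(&attr, _POSIX_SPAWN_DISABLE_ASLR);
    int rc = posix_spawn(&pid, argv[0], NULL, &attr, argv, environ);
    posix_spawnattr_destroy(&attr);
    return rc;
}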
There were some bugs in clang and llvm which prevented them from being completely deterministic, for example:
[cfe-dev] clang: not deterministic anymore? - Nov 3 2009; indeterminism was detected on code from LLVM bug 5355. The author says the indeterminism was present only with the -g option enabled.
[LLVMdev] Deterministic code generation and llvm::Iterators (2010)
[llvm-commits] Fix some TableGen non-deterministic behavior. (Sep 2012)
r196520 - Fix non-deterministic behavior. - SLPVectorizer was made deterministic only on Dec 5, 2013 (SmallSet replaced with VectorSet)
190793 - TableGen: give asm match classes deterministic order. "TableGen was sorting the entries in some of its internal data structures by pointer." - Sep 16, 2013
LLVM bug 14901 is a case where the order of compiler warnings was non-deterministic (Jan 2013).
The patch from 14901 contains comments about non-deterministic iteration over llvm::DenseMap:
- typedef llvm::DenseMap<const VarDecl *, std::pair<UsesVec*, bool> > UsesMap;
+ typedef std::pair<UsesVec*, bool> MappedType;
+ // Prefer using MapVector to DenseMap, so that iteration order will be
+ // the same as insertion order. This is needed to obtain a deterministic
+ // order of diagnostics when calling flushDiagnostics().
+ typedef llvm::MapVector<const VarDecl *, MappedType> UsesMap;
...
- // FIXME: This iteration order, and thus the resulting diagnostic order,
- // is nondeterministic.
The LLVM documentation says that there are non-deterministic and deterministic variants of several internal containers, like DenseMap vs MapVector: trunk/docs/ProgrammersManual.rst:
The difference between SetVector and other sets is that the order of iteration is guaranteed to match the order of insertion into the SetVector. This property is really important for things like sets of pointers. Because pointer values are non-deterministic (e.g. vary across runs of the program on different machines), iterating over the pointers in the set will not be in a well-defined order.
The drawback of SetVector is that it requires twice as much space as a normal set and has the sum of constant factors from the set-like container and the sequential container that it uses. Use it **only** if you need to iterate over the elements in a deterministic order.
...
StringMap iteration order, however, is not guaranteed to be deterministic, so any uses which require that should instead use a std::map.
...
``MapVector<KeyT,ValueT>`` provides a subset of the DenseMap interface. The main difference is that the iteration order is guaranteed to be the insertion order, making it an easy (but somewhat expensive) solution for non-deterministic iteration over maps of pointers.
It is possible that some authors of LLVM thought that in their code there was no need to preserve a deterministic iteration order. For example, there are comments in ARMTargetStreamer about the usage of MapVector for ConstantPools (ARMTargetStreamer.cpp - class AssemblerConstantPools). But how can we be sure that all usages of non-deterministic containers like DenseMap do not affect the output of the compiler? There are tens of loops iterating over DenseMap: "DenseMap.*const_iterator" regex in codesearch.debian.net
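To illustrate the difference (a hedged sketch assuming LLVM's ADT headers; the pointer keys stand in for the Decl pointers in the patch above):
#include "llvm/ADT/DenseMap.h"
#include "llvm/ADT/MapVector.h"

void demo(const void *a, const void *b)
{
    llvm::DenseMap<const void *, int> dm;  // iteration order depends on
    dm[a] = 1;                             // hashed pointer values, which
    dm[b] = 2;                             // ASLR can change between runs
    llvm::MapVector<const void *, int> mv; // iteration order is insertion
    mv[a] = 1;                             // order: deterministic
    mv[b] = 2;
    for (auto &kv : mv)
        (void)kv; // always visits a, then b
}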
Your version of LLVM and clang (3.0, from 2011-11-30) is clearly too old to have all the determinism enhancements from 2012 and 2013 (some are listed in my answer). You should update your LLVM and Clang, then recheck your program for deterministic compilation, then isolate the non-determinism in shorter, easier-to-reproduce examples (e.g. save bc - bitcode - from the middle stages), and then you can file a bug in the LLVM bugzilla.
Try the -S option for clang and gcc when compiling your source. This will generate a .s file in which you can see the assembler code, which could give you an idea of the differences at a lower level. Maybe you will realise the output is the same and your problem shifts from the compiler further down to the linker.
You should report this as a bug; a compiler certainly should be deterministic.
Your guess about the sort order is quite probably correct, in my experience. Most likely the compiler makes an arbitrary decision when two items compare equal (according to whatever measure is significant; they don't have to be actually the same), and that decision can vary depending on environmental factors, somehow. I've seen this before in GCC, where the same compiler built for different host OSes produced different results; in that case it turned out that the Windows qsort function operated slightly differently from the Linux (glibc) implementation.
That said, it could be something else; compilers aren't supposed to make random decisions, but there are plenty of opportunities for arbitrary decisions that might turn out to be unstable, somehow (address space randomization, perhaps?).
How do I preprocess a code base using the clang (or gcc) preprocessor while limiting its text processing to use only #define entries from a single header file?
This is useful generally: imagine you want to preview the immediate result of some macros that you are currently working on… without having all the clutter that results from the mountain of includes inherent to C.
Imagine a case, where there are macros that yield a backward compatible call or an up-to-date one based on feature availability.
#if __has_feature(XYZ)
# define JX_FOO(_o) new_foo(_o)
# define JX_BAR(_o) // nop
...
#else
# define JX_FOO(_o) old_foo(_o)
# define JX_BAR(_o) old_bar(_o)
...
#endif
A concrete example is a collection of Objective-C code that was ported to be ARC-compatible (Automatic Reference Counting) from manual memory management (non-ARC) using a collection of macros (https://github.com/JanX2/google-diff-match-patch-Objective-C/blob/master/JXArcCompatibilityMacros.h) so that it compiles both ways afterwards.
At some point, you want to drop non-ARC support to improve readability and maintainability.
Edit: The basis for getting the preprocessor output is described here: C, Objective-C preprocessor output
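For reference, a plain preprocessor run (before any restriction to a single header) looks something like this; -E stops after preprocessing and -P suppresses the #line markers:
clang -E -P source.m -o source.preprocessed.m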
Edit 2: If someone has details of how the source-to-source transformation options in Xcode are implemented (Edit > Refactor > Convert To…), that might help.
If you are writing the file from scratch or all the includes are in one place, why not wrap them inside of:
#ifndef MACRO_DEBUG
#include "someLib.h"
/* ... */
#endif
But as I mentioned, this only works when the includes are in consecutive lines and in the best case, you are starting to write the file yourself from scratch so you don't have to go and look for the includes.
This is a perfect case for sed/awk. However there exists an even better tool available for the exact use-case that you mention. Checkout coan.
To pre-process a source file as if the symbol <SYMBOL> is defined,
$ coan source -D<SYMBOL> sourcefile.c
Similarly, to pre-process a source file as if the symbol <SYMBOL> is NOT defined,
$ coan source -U<SYMBOL> source.c
This is a bit of a stupid solution, but it works: apparently you can use AppCode’s refactoring to delete uses of a macro.
This limits the solution to OS X, though. It also is slightly tedious, because you have to do this manually for every JX_FOO() and JX_BAR().
On a modern Pentium it seems it is no longer possible to give branching hints to the processor. Assuming that a profiling compiler such as gcc with profile-guided optimization gains information about likely branching behavior, what can it do to produce code that will execute more quickly?
The only option I know of is to move unlikely branches to the end of a function. Is there anything else?
Update.
http://download.intel.com/products/processor/manual/325462.pdf volume 2a, section 2.1.1 says:
"Branch hint prefixes (2EH, 3EH) allow a program to give a hint to the processor about the most likely code path for a branch. Use these prefixes only with conditional branch instructions (Jcc). Other use of branch hint prefixes and/or other undefined opcodes with Intel 64 or IA-32 instructions is reserved; such use may cause unpredictable behavior."
I don't know if these actually have any effect however.
On the other hand, section 3.4.1 of http://www.intel.com/content/dam/www/public/us/en/documents/manuals/64-ia-32-architectures-optimization-manual.pdf says:
"Compilers generate code that improves the efficiency of branch prediction in Intel processors. The Intel C++ Compiler accomplishes this by:
keeping code and data on separate pages,
using conditional move instructions to eliminate branches,
generating code consistent with the static branch prediction algorithm,
inlining where appropriate,
unrolling if the number of iterations is predictable.
With profile-guided optimization, the compiler can lay out basic blocks to eliminate branches for the most frequently executed paths of a function or at least improve their predictability. Branch prediction need not be a concern at the source level. For more information, see Intel C++ Compiler documentation."
http://cache-www.intel.com/cd/00/00/40/60/406096_406096.pdf says in "Performance Improvements with PGO":
"PGO works best for code with many frequently executed branches that are difficult to predict at compile time. An example is code with intensive error-checking in which the error conditions are false most of the time. The infrequently executed (cold) error-handling code can be relocated so the branch is rarely predicted incorrectly. Minimizing cold code interleaved into the frequently executed (hot) code improves instruction cache behavior."
There are two possible sources for the information you want:
There's the Intel 64 and IA-32 Architectures Software Developer's Manual (3 volumes). This is a huge work which has evolved over decades. It's the best reference I know on a lot of subjects, including floating-point. In this case, you want to check volume 2, the instruction set reference.
There's the Intel 64 and IA-32 Architectures Optimization Reference Manual. This will tell you in somewhat brief terms what to expect from each microarchitecture.
Now, I don't know what you mean by a "modern Pentium" processor, this is 2013, right? There aren't any Pentiums anymore...
The instruction set does support telling the processor whether a branch is expected to be taken or not taken via a prefix on the conditional branch instructions (such as JC, JZ, etc). See volume 2A of (1), section 2.1.1 (of the version I have), Instruction Prefixes. There are the 2E and 3E prefixes for not taken and taken, respectively.
As to whether these prefixes actually have any effect, if we can get that information, it will be on Optimization Reference Manual, the section for the microarchitecture you want (and I'm sure it won't be the Pentium).
Apart from using those, there is an entire section on the Optimization Reference Manual on that subject, that's section 3.4.1 (of the version I have).
It makes no sense to reproduce that here, since you can download the manual for free.
Briefly:
Eliminate branches by using conditional instructions (CMOV, SETcc),
Consider the static prediction algorithm (3.4.1.3),
Inlining
Loop unrolling
Also, some compilers, GCC, for instance, even when CMOV is not possible, often perform bitwise arithmetic to select one of two distinct things computed, thus avoiding branches. It does this particularly with SSE instructions when vectorizing loops.
Basically, the static conditions are:
Unconditional branches are predicted to be taken (... kind of expectable...)
Indirect branches are predicted not to be taken (because of a data dependency)
Backward conditionals are predicted to be taken (good for loops)
Forward conditionals are predicted not to be taken
You probably want to read the entire section 3.4.1.
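As a small illustration of laying out code to agree with those static rules (a sketch; real prediction behavior varies by microarchitecture):
/* The error path is a forward conditional branch (statically predicted
 * not taken); the loop's closing branch is backward (predicted taken). */
int sum_nonnegative(const int *buf, int n)
{
    int sum = 0;
    for (int i = 0; i < n; ++i) {  /* backward branch: predicted taken */
        if (buf[i] < 0)            /* forward branch: predicted not taken */
            return -1;             /* cold error path */
        sum += buf[i];
    }
    return sum;
}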
If it's clear that a loop is rarely entered, or that it normally iterates very few times, then the compiler might avoid unrolling the loop, as doing so can add a lot of harmful complexity to handle edge conditions (an odd number of iterations, etc.). Vectorisation, in particular, should be avoided in such cases.
The compiler might rearrange nested tests, so that the one that most frequently results in a short-cut can be used to avoid performing a test on something with a 50% pass rate.
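For instance (a sketch; the predicates and their pass rates are made up):
int almost_never(int x);  /* true ~1% of the time */
int fifty_fifty(int x);   /* true ~50% of the time */

void process(int x)
{
    /* Testing the near-always-false condition first lets &&
     * short-circuit past the 50/50 test on most calls. */
    if (almost_never(x) && fifty_fifty(x)) {
        /* rare case */
    }
}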
Register allocation can be optimised to avoid having a rarely-used block force a register spill in the common case.
These are just some examples. I'm sure there are others I haven't thought of.
Off the top of my head, you have two options.
Option #1: Inform the compiler of the hints and let the compiler organize the code appropriately. For example, GCC supports the following ...
__builtin_expect((long)!!(x), 1L) /* GNU C to indicate that <x> will likely be TRUE */
__builtin_expect((long)!!(x), 0L) /* GNU C to indicate that <x> will likely be FALSE */
If you put them in macro form such as ...
#if <some condition to indicate support>
#define LIKELY(x) __builtin_expect((long)!!(x), 1L)
#define UNLIKELY(x) __builtin_expect((long)!!(x), 0L)
#else
#define LIKELY(x) (x)
#define UNLIKELY(x) (x)
#endif
... you can now use them as ...
if (LIKELY(x != 0)) {
    /* DO SOMETHING */
} else {
    /* DO SOMETHING ELSE */
}
This leaves the compiler free to organize the branches according to static branch prediction algorithms, and/or if the processor and compiler support it, to use instructions that indicate which branch is more likely to be taken.
Option #2: Use math to avoid branching.
if (a < b)
y = C;
else
y = D;
This could be re-written as ...
x = -(a < b); /* x = -1 if a < b, x = 0 if a >= b */
x &= (C - D); /* x = C - D if a < b, x = 0 if a >= b */
x += D; /* x = C if a < b, x = D if a >= b */
Hope this helps.
It can make the fall-through (i.e. the case where a branch is not taken) the most used path. That has two big effects:
only 1 branch can be taken per clock, or on some processors even per 2 clocks, so if there are any other branches (there usually are, most code that matters is in a loop), a taken branch is bad news, a non-taken branch less so.
when the branch predictor is wrong, the code that it does have to execute is more likely to be in the code cache (or µop cache, where applicable). If it wasn't, that would have been a double-whammy of restarting the pipeline and waiting for a cache miss. This is less of an issue in most loops, since both sides of the branch are likely to be in the cache, but it comes into play in big loops and other code.
It can also decide whether to do if-conversion based on better data than a heuristic guess. If-conversions may seem like "always a good idea", but they're not, they're only "often a good idea". If the branch in the branching implementation is very well-predicted, the if-converted code can well be slower.
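For reference, a source-level shape that compilers typically if-convert (a sketch; whether a CMOV is actually emitted depends on the target and the compiler's cost model):
int select(int a, int b, int C, int D)
{
    return (a < b) ? C : D; /* often compiled to cmp + cmov instead of a branch */
}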
I have written an if-clause that checks whether I should break the program for debugging or not:
if (a < 0) {
a = a;
}
a should not become negative, but I have found that it does, and I want to break for debugging to see why it becomes negative if that happens; hence I have written this if-clause. On the line a = a; I have set a breakpoint, which is supposed to stop the program if it enters the if-clause. The thing is that the line doesn't do anything (which is necessary in order not to mess anything up), so the line is optimized away and the breakpoint ends up after the if-clause. This trick usually works, but apparently the compiler wasn't very fond of it this time.
The language is C++, and I'm compiling with qmake (a Qt tool) and mingw.
My question is, how can I prevent the compiler from optimizing away lines of code when I have breakpoints set on them? Or is there some other way to conditionally break the program for debugging?
One possibility is to call an I/O function. In Java, one could write:
if (a < 0) {
System.out.printf("");
}
Similarly, in C/C++, one could write:
if (a < 0) {
printf("");
}
Even though the function call is effectively a no-op, the compiler doesn't know that, and is unlikely to optimize the call away.
Or is there some other way to conditionally break the program for debugging?
Many modern IDEs allow one to set conditional breakpoints: Visual Studio, Eclipse.
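The same is possible from a plain debugger; with gdb, for example (the file name and line number are placeholders):
(gdb) break myfile.cpp:42 if a < 0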
I usually put a printf (or cout, or whatever is appropriate for the language that you are using) here so that I can set a breakpoint, e.g.
if (a < 0) {
printf("a < 0 !\n"); // <<< set breakpoint here
}
If it's C or C++, simply defining a as volatile should help.
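A minimal sketch of that idea (compute() is a stand-in for whatever produces a):
int compute(void);

void check(void)
{
    volatile int a = compute(); /* volatile: every access to a must be kept */
    if (a < 0) {
        a = a; /* no longer optimized away; safe spot for a breakpoint */
    }
}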
I defined a NO_OP() macro which doesn't do anything and doesn't require the file that's using it to include any header files:
#define NO_OP() {float f = 0; if (f != 0) exit(0);}
I don't know if the compiler will be able to optimize this macro away, but it works for me with MinGW.
It’s not portable, but with MSVC I use __asm nop (surrounded by #ifndef NDEBUG…#endif if the code is likely to remain in place for a while) to insert a literal no-op that I know the compiler won’t touch.
I have an idea about what it is. My questions are:
1.) If I write my code so that it is amenable to tail-call optimization (the last statement in a (recursive) function being a function call only, with no other operation there), do I need to set any optimization level so that the compiler does TCO? In which optimization mode will the compiler perform TCO: optimizing for space or for time?
2.) How do I find out which compilers (MSVC, gcc, ARM-RVCT) support TCO?
3.) Assuming some compiler does TCO and we enable it, what is the way to find out that the compiler has actually done it? Will code size tell, or the cycles taken to execute, or both?
-AD
Most compilers support TCO; it is a relatively old technique. As far as how to enable it with a specific compiler, check the documentation. gcc will enable the optimization at every optimization level except -O1; I think the specific option is -foptimize-sibling-calls. As far as how to tell how/if the compiler is doing TCO, look at the assembler output (gcc -S, for example) or disassemble the object code.
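For example (a sketch; the exact assembler and label names vary by platform and compiler version):
gcc -O2 -S fibcycle.c
# without TCO, the body of fibcycle still contains:  call fibcycle
# with TCO, that call becomes a jump back into the function, e.g.:  jmp .L3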
Optimization is compiler-specific. Consult each compiler's documentation for the various optimization flags.
You will find that in the compiler's documentation too. If you are curious, you can write a tail-recursive function and pass it a big argument, and look out for a stack overflow (though checking the generated assembler might be a better choice, if you understand the generated code).
You just use the debugger and look at the addresses of function arguments/local variables. If they increase/decrease on each logical frame that the debugger shows (or if it actually shows only one frame, even though you did several calls), you know whether or not TCO was done.
If you want your compiler to do tail-call optimization, just check either
a) the documentation of the compiler, for the optimization level at which it is performed, or
b) the asm, to see whether the function calls itself (you don't even need much asm knowledge to spot the symbol of the function appearing again).
If you really, really want tail recursion, my question would be:
why don't you perform the tail-call removal yourself? It means nothing more than removing the recursion, and if it is removable, then it is possible not only for the compiler at a low level but also for you at the algorithmic level, so you can program it directly into your code (it means nothing more than using a loop instead of a call to yourself).
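A minimal sketch of that manual transformation (factorial is an arbitrary example):
/* Tail-recursive form: the recursive call is the last operation. */
unsigned long fact_rec(unsigned long n, unsigned long acc)
{
    if (n <= 1) return acc;
    return fact_rec(n - 1, acc * n); /* tail call */
}

/* The same computation with the recursion removed by hand. */
unsigned long fact_loop(unsigned long n)
{
    unsigned long acc = 1;
    while (n > 1) { acc *= n; --n; } /* loop replaces the tail call */
    return acc;
}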
One way to determine if tail-call is happening is to see if you can force a stack overflow. The following program does not produce a stack overflow using VC++ 2005 Express Edition and, even though its results exceed the capacity of long double rather quickly, you can tell that all of the iterations are being processed when TCO is happening:
/* FibTail.c 0.00 UTF-8 dh:2008-11-23
* --|----1----|----2----|----3----|----4----|----5----|----6----|----*
*
* Demonstrate Fibonacci computation by tail call to see whether it is
* is eliminated through compiler optimization.
*/
#include <stdio.h>
long double fibcycle(long double f0, long double f1, unsigned i)
{ /* accumulate successive fib(n-i) values by tail calls */
if (i == 0) return f1;
return fibcycle(f1, f0+f1, --i);
}
long double fib(unsigned n)
{ /* the basic fib(n) setup and return. */
return fibcycle(1.0, 0.0, n);
}
int main(int argc, char* argv[])
{ /* compute some fibs until something breaks */
int i;
printf("\n i fib(i)\n\n");
for (i = 1; i > 0; i+=i)
{ /* Do for powers of 2 until i flips negative
or stack overflow, whichever comes first */
printf("%12d %30.20LG \n", i, fib((unsigned) i) );
}
printf("\n\n");
return 0;
}
Notice, however, that the simplifications needed to make a pure tail call in fibcycle are tantamount to figuring out an iterative version that doesn't do a tail call at all (and will work with or without TCO in the compiler).
It might be interesting to experiment in order to see how well TCO can find optimizations that are not already near-optimal and easily replaced by iteration.