What is the difference between the -DNDEBUG and -DNS_BLOCK_ASSERTIONS flags? - objective-c

I've tried the FauxPas source code analyzer and it showed me that I am missing the -DNDEBUG flag, with the following description:
This argument disables the C standard library assertion macro (as
defined in assert.h).
However, when I checked my build settings I found a flag with a very similar description: -DNS_BLOCK_ASSERTIONS=1.
So I wonder now: do I really need the flag that FauxPas suggests, or am I fine with the one I have?

NDEBUG disables assert(), which is part of the C standard library. NS_BLOCK_ASSERTIONS disables NSAssert(), which is part of Foundation. You will generally need both if you have both kinds of assertions in your code.
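For illustration, a minimal C++ sketch (my own example, not from the question) of what -DNDEBUG controls; NSAssert() is governed separately by NS_BLOCK_ASSERTIONS and is not shown here:

// demo.cpp
#include <cassert>
#include <cstdio>
int main() {
    assert(1 + 1 == 3);                    // fires and aborts in a normal build
    std::puts("assert was compiled out");  // only reached when NDEBUG is defined
}
// g++ demo.cpp && ./a.out          -> assertion failure, program aborts
// g++ -DNDEBUG demo.cpp && ./a.out -> prints the message; assert() expands to nothing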

Related

Warnings issued during compile phase of Stan

I get two warnings at the C++ compile phase with all the Stan programs that I submit.
C:/Larry/R/win-library/3.4/BH/include/boost/config/compiler/gcc.hpp:186:0: warning: "BOOST_NO_CXX11_RVALUE_REFERENCES" redefined # define BOOST_NO_CXX11_RVALUE_REFERENCES
and
cc1plus.exe: warning: unrecognized command line option "-Wno-ignored-attributes"
Since I don't get these warnings when submitting other Rcpp programs, I suspect that they are generated while g++ compiles the Stan program. They seem to be harmless, but they are disconcerting. I see many other messages on Stack Overflow that include these warnings, but I have not found any explanation of them, nor a way to correct whatever is producing them.
I am running R 3.4.3 and RStudio 1.1.383 on Windows 10 with Rtools 3.4.0.1964. I'd be grateful to anyone who can explain these warnings to me and tell me what I have to do to correct them.
Don't worry about either of those.
The first is telling you that the header redefines that Boost macro, but it is redefining it to what it was already set to, so nothing actually changes.
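To see why an effectively identical redefinition can still warn, here is a tiny standalone illustration (my own example, not the Boost code): the two definitions mean the same thing, but because the token sequences differ, g++ emits exactly this kind of "redefined" warning, and it is harmless.

// redef.cpp
#define ANSWER 42
#define ANSWER (42)   // g++: warning: "ANSWER" redefined -- same value, different spelling
int main() { return ANSWER - 42; }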
The second is avoidable if you take -Wno-ignored-attributes out of the CXXFLAGS line of your ~/.R/Makevars file. That option applies to a different compiler or compiler version and is simply being ignored.
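For example, if your ~/.R/Makevars currently contains something like this (hypothetical contents):

CXXFLAGS = -O3 -Wno-ignored-attributes

then trimming it to the following should silence the second warning:

CXXFLAGS = -O3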

How to enable stronger optimization builds

I am trying to build PETSc and am having trouble enabling optimization. By default, PETSc always creates a debug build, but I can turn that off by passing --with-debugging=0 to cmake. However, this only enables -O1 by default, and since my application is extremely time-consuming and very time-critical, I want at least -O2. I can't find an option except --CFLAGS, which works, but other options are always appended after mine, so -O1 would override my -O2.
I grepped for "-O" to find where to set the flag manually, but this gave me a million lines, mostly from the configure.log file, and didn't help.
Does anybody know the file in which to set the flag, or a workaround, like another option that disables the use of the last specified -O# and enables the strongest or first one instead?
Citing PETSc's install instructions:
Configure defaults to building PETSc in debug mode. One can switch to using optimized mode with the toggle option --with-debugging [defaults to debug enabled]. Additionally one can specify more suitable optimization flags with the options COPTFLAGS, FOPTFLAGS, CXXOPTFLAGS.
./configure --with-cc=gcc --with-fc=gfortran --with-debugging=0 COPTFLAGS='-O3 -march=p4 -mtune=p4' FOPTFLAGS='-O3 -qarch=p4 -qtune=p4'

ICU Transliterator doesn't work without -O2 flag with g++

I have been having problems with ICU's rule-based Transliterator which turned out to be caused by a missing -O2 flag on debug builds using g++ 4.7.1. It was working fine when I did a release build (which had -O2), but when I built my project without that flag for debugging, the Transliterator object would never be created properly.
Transliterator* t = Transliterator::createFromRules(id, rules, UTRANS_FORWARD, parseError, status);
Without optimisation, t is assigned a null pointer and status is set to 32767, which translates to BOGUS UErrorCode when run through ICU's u_errorName().
The first thing I tried was to remove the -g debug flag from my build, but that made no difference to what createFromRules() returned. Only when I added -O2 did it create the Transliterator object.
Testing indicates that -O1, -O2 and -O3 all work as expected, and only -O0 causes this to happen.
Can someone explain why this should be the case?
You should not get different results with and without debugging. Can you create a small test case (including the rules), give the ICU version and OS/platform, and file a ticket? g++ 4.7.1 was released two months ago, so this could be a compiler bug (ICU has a long history of finding those!) or a latent bug. You can link the ticket to this question and vice versa.
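One latent bug worth ruling out first (an assumption on my part, not something confirmed by the question): ICU functions return immediately if the incoming UErrorCode already holds an error, so a status variable never initialized to U_ZERO_ERROR picks up whatever garbage is on the stack, and that garbage can easily differ between -O0 and -O2 builds. A minimal sketch of the call with explicit initialization and error checking (the ID and rules are hypothetical):

// g++ demo.cpp -licuuc -licui18n
#include <cstdio>
#include <unicode/translit.h>
#include <unicode/unistr.h>
#include <unicode/utrans.h>

int main() {
    UErrorCode status = U_ZERO_ERROR;      // must not be left uninitialized
    UParseError parseError;
    icu::UnicodeString id("Demo");         // hypothetical transliterator ID
    icu::UnicodeString rules("a > b;");    // hypothetical rule set
    icu::Transliterator* t = icu::Transliterator::createFromRules(
        id, rules, UTRANS_FORWARD, parseError, status);
    if (U_FAILURE(status)) {
        std::printf("createFromRules failed: %s\n", u_errorName(status));
        return 1;
    }
    delete t;
    return 0;
}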

Why is my code performing poorly when built with RealView tools but better with CodeSourcery?

I have a C project which was previously built with CodeSourcery's GNU toolchain. Recently it was converted to use RealView's armcc compiler, but the performance we are getting with the RealView tools is very poor compared to when it is compiled with the GNU tools. Shouldn't it be the opposite, i.e. shouldn't it give better performance when compiled with RealView's tools? What am I missing here? How can I improve the performance with RealView's tools?
I have also noticed that if I run the binary produced by the RealView tools with Lauterbach it crashes, but if I run it using RealView ICE it runs fine.
UPDATE 1
RealView command line:
armcc -c --diag_style=ide --depend_format=unix_escaped --no_depend_system_headers --no_unaligned_access --c99 --arm_only --debug --gnu --cpu=ARM1136J-S --fpu=SoftVFP --apcs=/nointerwork -O3 -Otime
GNU GCC command line:
arm-none-eabi-gcc -mcpu=arm1136jf-s -mlittle-endian -msoft-float -O3 -Wall
I am using RealView tools version 4.1 and GCC version 4.4.1.
UPDATE 2
The Lauterbach issue has been solved. It was caused by semihosting: the semihosting SWI was not being handled in the Lauterbach environment. Retargeting the C library to avoid semihosting did the trick, and now my program runs successfully with Lauterbach as well as RealView ICE. But the performance issue remains.
Since you have optimisations on, and the code crashes in some environments, it may be that your code relies on undefined behaviour or contains some other latent error. Such behaviour can change with optimisation, or even break altogether.
I suggest that you try both toolchains without optimisation, make sure that the warning level is set high, and fix all the warnings. GCC is far better than armcc at error checking, so it serves as a reasonable static-analysis check. If the code builds clean it is more likely to work, and may be easier for the optimiser to handle.
Have you tried removing --no_unaligned_access? ARM11s can typically perform unaligned accesses (if enabled in the startup code), and forcing the compiler/library not to use them may be slowing down your code.
The current version of RVCT says of '--fpu=SoftVFP':
In previous releases of RVCT, if you specified --fpu=softvfp and a CPU with implicit VFP hardware, the linker chose a library that implemented the software floating-point calls using VFP instructions. This is no longer the case. If you require this legacy behavior, use --fpu=softvfp+vfp.
This suggests to me that if you have an old version of RVCT, the behaviour will be to use software floating point regardless of the presence of hardware floating point, while in the GNU version -msoft-float will use hardware floating-point instructions when an FPU is available.
So what version of RVCT are you using?
Either way, I suggest that you remove the --fpu option, since the compiler will make an appropriate implicit selection based on the --cpu option. You also need to correct the CPU selection: your RVCT option says --cpu=ARM1136J-S, not ARM1136JF-S as you told GCC. This will no doubt prevent the compiler from generating VFP instructions, since you have told it there is no VFP.
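Putting those suggestions together, the armcc invocation might look something like this (a sketch, untested; it corrects the CPU name, drops --fpu so the compiler derives the FPU from --cpu, and also drops --no_unaligned_access per the earlier suggestion):

armcc -c --diag_style=ide --depend_format=unix_escaped --no_depend_system_headers --c99 --arm_only --debug --gnu --cpu=ARM1136JF-S --apcs=/nointerwork -O3 -Otime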
The same source code can produce dramatically different binaries due to factors like: different compilers (llvm vs gcc, gcc 4 vs gcc 3, etc.); different versions of the same compiler; different compiler options for the same compiler; optimization (on either compiler); and whether it is compiled for release or debug (or whatever terms you want to use; the binaries are quite different). When going embedded, you add the complication of a bootloader or ROM monitor (debugger) and things like that, and then the host-side tools that talk to the ROM monitor or the compiled-in debugger. Despite being a far better compiler than gcc, ARM's compilers were infected with the assumption that the binaries would always be run on top of their ROM monitor. I seem to remember that by the time RVCT became their primary compiler that assumption was on its way out, but I have not really used their tools since then.
The bottom line is that there are a handful of major factors that can affect the differences between binaries, and they can and will lead to a different experience. Assuming that you will get the same performance or results is a bad assumption; the expectation is that the results will differ. Likewise, within the same environment, you can create binaries that give dramatically different performance results, all from the same source code.
Do you have compiler optimizations turned on in your CodeSourcery build, but not in the RealView build?

Any Macro or Technique for Partial Optimization?

I am working on a lock-free structure with the g++ compiler. It seems that with the -O1 switch, g++ will change the execution order of my code. How can I forbid g++'s optimization on certain parts of my code while keeping the optimization for the other parts? I know I can split it into two files and link them, but that looks ugly.
If you find that gcc changes the order of execution in your code, you should consider using a memory barrier. Just don't assume that volatile variables will protect you from that issue. They only make sure that, within a single thread, the behavior is what the language guarantees, and that variables are always read from their memory location to account for changes "invisible" to the executing code (e.g. changes to a variable made by a signal handler).
GCC has supported OpenMP since version 4.2. You can use it to create a memory barrier with a special #pragma directive.
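A minimal sketch (my own example, assuming GCC 4.2+ invoked with -fopenmp): the flush directive is OpenMP's memory barrier, and GCC's builtin __sync_synchronize() is an alternative full barrier that needs no OpenMP at all.

// g++ -fopenmp demo.cpp
static int data = 0;
static int ready = 0;

void publish() {
    data = 42;
    #pragma omp flush   // barrier: the write to data cannot be reordered past this point
    ready = 1;
    // __sync_synchronize();  // GCC builtin equivalent, no OpenMP required
}

int main() { publish(); return ready + data - 43; }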
A very good insight into lock-free code is this PDF by Herb Sutter and Andrei Alexandrescu: C++ and the Perils of Double-Checked Locking.
You can use the function attribute __attribute__((optimize(0))) to set the optimization level for a single function, or #pragma GCC optimize for a block of code. These require GCC 4.4, though, I think - check your GCC manual. If they aren't supported, separating the source is your only option.
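A sketch of both forms (assuming GCC 4.4 or newer; the function bodies are placeholders of my own):

// Per function: compile just this function without optimization.
__attribute__((optimize("O0")))
void store_flag(volatile int* flag) {
    *flag = 1;
}

// Per region: everything between push_options and pop_options is built at -O0.
#pragma GCC push_options
#pragma GCC optimize ("O0")
void spin_wait(volatile int* flag) {
    while (*flag == 0) { /* wait */ }
}
#pragma GCC pop_options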
I would also say, though, that if your code fails with optimization turned on, it is most likely that your code is simply wrong, especially as you're trying to do something that is fundamentally very difficult. The processor may reorder your code anyway (within the limits of its memory model), so any reordering you're getting from GCC could potentially occur regardless.