Example crash logs from nonatomic variable access on multiple threads? - objective-c

What sorts of crashes can a lack of thread safety cause? (Sort of a follow-up to Under what circumstances are atomic properties useful?).
Can anyone reproduce a crash example (even if sporadically) with a test case?
I'm trying to sort out from a large number of crash logs what percent of them are related to a particular integer variable being accessed from multiple threads. (Yes, I've already confirmed that this access can happen in my iOS app, the question is just how often does it happen.)
Obviously improper variable access can lead to unexpected effects and more crashes downstream, but I'm interested only in those related directly to the access/mutation of the variable (since downstream effects won't generally be the same from one app to another). Also, only interested in integer (or other completely immutable) variables.
I'm seeking any example error codes / exceptions / crash logs from which I can extract keywords and regular expressions. Even better would be a test app or unit test case which can reproduce a crash with high probability. I tried a simple example in a unit test but couldn't seem to cause a crash.

While this approach sounds so logical, race conditions manifest themselves in manifold ways and defy simple characterization within crash logs. Problems arising from data races are particularly vexing because they often result in heisenbugs. And sometimes it doesn't even crash but rather just produces incorrect results (which may or may not result in other problems).
I know this isn't the answer you're looking for, but while I can see the appeal of your plan, it is unlikely to be a productive exercise.
If it's a question of prioritizing the thread-safety fix versus other issues, there is no simple answer. The app is just going to be susceptible to crashes and other unpredictable behavior until this data race issue is resolved. But we can't reliably forecast what percentage of your current crash logs will be addressed. IMHO, given that crashing is the quickest way to lose a user-base, this thread-safety fix strikes me as a very high priority.
In terms of identifying the unsynchronized access, the Thread Sanitizer, as outlined in this WWDC video, is an excellent tool.
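For what it's worth, here is a minimal sketch of the kind of unsynchronized access in question, in plain C with pthreads (the names and iteration counts are made up). In my experience it rarely crashes outright; it mostly just loses increments, which is exactly the point made above. But built with -fsanitize=thread, the Thread Sanitizer reports the data race reliably:

#include <pthread.h>
#include <stdio.h>

static int counter = 0;                 /* shared, no synchronization */

static void *increment(void *arg) {
    for (int i = 0; i < 100000; i++)
        counter++;                      /* unsynchronized read-modify-write */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* Typically prints less than 200000; TSan flags the race either way. */
    printf("%d\n", counter);
    return 0;
}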

Related

Assertions in ABAP

Over the years I've written code in a variety of languages and environments, but one constant seemed to be the consensus on the use of assertions. As I understand it, they are there for the development process when you want to identify "impossible" errors and other situations to which your first reaction would be "that can't be right", and which cannot be handled gracefully, leaving the system in a state where it has no choice but to terminate. Assertions are easy to understand and quick to code, but due to their fail-fast nature they are unsuitable for production code. Ideally, assertions are used to discover all development bugs and are then removed or turned off when shipping the code. Input or program states that are wrong but possible (and expected to occur) should instead be handled gracefully via exceptions or other error-handling techniques.
However, none of this seems to hold true for writing ABAP code for SAP. I've just spent the better part of an hour trying to track down the precise location where an assert was giving me an unintelligible error. This turned out to be five levels down in standard SAP code, which is apparently riddled with ASSERT statements. I now know that a certain variable identifying a table IS NOT INITIAL while its accompanying variable identifying a field is.
This tells me nothing. The Web Dynpro component running this code actually "catches" this assert, showing me a generic error message, which only serves to prevent the debugger from launching when the assert is tripped.
My question therefore is what the guidelines or best practices are for the use of assertions in ABAP. Is this SAP writing bad code? Is it an accepted practice to fill your custom code with asserts and leave them in when shipping the code? If so, how would we go about handling these asserts in runtime so that the application doesn't crash and burn while still being able to identify the cause of the error?
The guidelines and best practices are virtually the same in ABAP development as in any other language. Assertions should be used as internal guidance checks only; exceptions are for regular input validation errors and the like. It might be sensible to leave the assertions in the code - after all, you'd probably rather have your program crash in a controlled fashion than continue in an unforeseen way and possibly damage some critical data in the process without anyone noticing. Take a look at checkpoint groups if you don't want your program to abort in a production environment - but in my opinion: what's the use of a sanity check (as a last line of defense) if it's disabled in the environment where it matters most?
Of course I'm assuming that the input is validated properly (so that crashes are prevented) and that all APIs are used according to the intended use and documentation. Unfortunately - as with every other programming language - it's up to the developer to live up to these standards.
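For readers coming from C-family languages, the same trade-off shows up with the standard assert macro: the check aborts the program when it fails, and defining NDEBUG at build time compiles the checks out, which is roughly analogous to deactivating a checkpoint group (except that checkpoint groups can be reconfigured without rebuilding). A minimal sketch, with an invented function name:

#include <assert.h>

/* Internal guidance check: callers are required to pass a non-empty table name.
 * Built with -DNDEBUG, the assert compiles away entirely. */
static int row_count(const char *table_name) {
    assert(table_name != NULL && table_name[0] != '\0');
    /* ... look the table up and count its rows ... */
    return 0;
}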

Why testing should not continue when defect has critical priority?

I am trying to understand testing theory, and a few times I have seen the following, which relates to priority:
1. Critical: Bugs at this level must be resolved as soon as possible. Testing should not progress until a known critical defect is fixed.
or
1. Immediate fix, blocks further testing, very visible
But to be honest, I don't know why. Why could I not test, e.g., another part of the system until the critical-priority bug has been fixed?
I expect the reasoning behind this guideline is that such bugs mark massive interdependencies between the critical parts and other parts of your system.
For example, if you have an application that deals with money pervasively, you may have a test at the base of your system that determines whether currencies are represented accurately. If they aren't, and you keep testing other parts of your system, you may find other numerical tests that report wrong results elsewhere. Unfortunately, you may also waste time looking for an error in your math or your algorithm, and spend time fixing those numerical tests while, in fact, they're doing the Right thing with the wrong currency.
If it's possible to test something else, then the bug is not priority/severity 1. Simple as that. :-) Priority 1 bugs tend to be things like "code does not compile" or "the system crashes at bootup and there is no workaround".

When you write your code, do you deal with errors proactively or reactively?

In other words, do you spend time anticipating errors and writing code to get around these potential issues, or do you write the code as you see fit and then work through any errors on an issue by issue basis?
I've been thinking a lot about this lately and I'm very much a reactive person. I write my code, give it a whirl, go back and correct errors, and repeat until the application works as expected. However, a friend of mine offered that he spends time thinking about how each line will be interpreted and fixes errors before they occur.
I must point out that my reactive approach applies purely pre-live. I definitely make sure my application is working before it goes live.
There should always be a balance.
Too much error checking is slow and leads to garbage code. Too little error checking makes your program crash on edge cases, which is not something you want to discover after it has shipped.
So you decide how reliable a piece of code should be and implement error checking accordingly. Some test utility can afford to be not very reliable: less error checking. A COM server meant to be used by a third-party search service deep in the background should be super reliable: much more error checking.
I think asking this in isolation is kind of weird, and very subjective; however, there are obviously a bunch of techniques that let you do each. I tend to use these two:
Test-driven development (this would seem to be proactive)
Strong, static typing (reactive, but part of a tight iterative development cycle, as in, it's enforced by my ML compiler, and I compile a lot)
Very occasionally I swerve into the world of formal verification of programs. That's definitely "reactive", but if you think a little more up-front, it tends to make the verification easier.
I must also say that I value a lot of up-front thought in programming. The easiest way to avoid bugs is to not write them in the first place. Sometimes it's inevitable, but often a little more time spent thinking about the problem can lead to better-quality solutions, and then the rest can be taken care of using the kinds of automated methods I talked about above.
I usually ask myself a bunch of what-ifs when coding, like
The user clicks the button, what if they didn't select a date?
The user is typing in the search box, what if they try to type html in there?
My label text depends on a value from a shared drive, what if it's not mapped?
and so on. By doing this I've found that when the application does go live, there are a ton fewer errors and I can focus on fixing more obscure bugs instead of correcting conditions that should have been in place to begin with.
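As a purely hypothetical illustration of those what-ifs in C (the function name and messages are invented), every external input gets checked before it is used, with the failure path decided up front:

#include <stdio.h>
#include <string.h>

/* Hypothetical handler: names and checks are made up for illustration. */
static int handle_search(const char *date_str, const char *query) {
    if (date_str == NULL || date_str[0] == '\0') {
        fprintf(stderr, "no date selected\n");      /* what if they didn't pick a date? */
        return -1;
    }
    if (query != NULL && strpbrk(query, "<>") != NULL) {
        fprintf(stderr, "markup not allowed\n");    /* what if they type HTML in there? */
        return -1;
    }
    /* ... proceed with the validated input ... */
    return 0;
}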
I live by a simple principle when considering error-handling: garbage in, garbage out. If you don't want any garbage (e.g. invalid input) messing up your software, you have to find all the points in your software where it can get in and handle it. Of course, the more complicated your software is, the harder it is to find every point of entry, but I feel that the more you do up front the less reactive you will need to be later on.
I advocate the proactive approach.
I try to write code in a style that results in maintainable and reliable code
I use defensive programming techniques to prevent silly errors caused by lapses of attention and the like
I design the database model according to the fortress principle, with the SQL code checking results after each individual operation
I think of potential problems that can happen with that part of the code and account for them. Not every possibility, but the major ones I can think of at the time.
This usually results in software operating rather smoothly. At times it even surprises me but that was the intended goal, so here we are.
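The "check the result of each individual operation" idea carries over to any API that reports errors through return values; here is a small sketch in C, with file I/O standing in for the SQL calls (the function name is invented):

#include <stdio.h>

/* Every call that can fail is checked before the next step depends on it. */
static int save_report(const char *path, const char *text) {
    FILE *f = fopen(path, "w");
    if (f == NULL)
        return -1;                      /* could not open: stop here */
    if (fputs(text, f) == EOF) {
        fclose(f);
        return -1;                      /* write failed: clean up and stop */
    }
    if (fclose(f) != 0)
        return -1;                      /* flush/close failed: report that too */
    return 0;
}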
IMHO, the word "Error" (or its loose synonym "bug") itself means that it is a program behavior that was not foreseen.
I usually try to design with all possible scenarios in mind. Of course, it is usually not possible to think of all possible cases. But thinking through and allowing for as many scenarios as possible is usually better than just getting something working as soon as possible. This saves a lot of time and effort debugging and redesigning the code. I often sit down with pen and paper for even the smallest of programming tasks before actually typing any code into my editor.
As I said, this will not eliminate all errors. For me it pays off many times over in terms of time spent debugging. Another benefit is that it results in a more solid and maintainable design with fewer bugfixing hacks and special cases added on later. But in any case, you will have to do a lot of debugging after the code is done.
This does not apply when all you want is a mockup or rapid prototype. Also, practical constraints such as deadlines often make a thorough evaluation difficult or impossible.
What kind of programming? It's impossible to answer this in any general way. (It's like asking "do you wear a helmet when playing?" -- well, playing what?)
At work, I'm working on a database-backed website. The requirements are strict, and if I don't anticipate how users will screw it up, I'm going to get a call at some odd hour of the day to fix it.
At home, I'm working on a program ... I don't even know what it'll do yet. I can't deal with 'errors' because I don't know what 'an error' is in this context, because I don't know what correct behavior is going to be. The entire purpose of the program can and frequently does change on a timescale of minutes to hours, so even a couple minutes spent thinking about errors this early is a complete waste of time. (It's even worse than browsing SO, since error-handling adds lines of code.)
I guess the only general answer is "I do what makes sense in terms of saving time in the long term", which is, after all, the whole reason to use machines to do work for us.

Is it feasible to track or measure the cause of bugs or is this just asking for unintended consequences?

Is there a method for tracking or measuring the cause of bugs which won't result in unintended consequences from development team members? We recently added the ability to assign the cause of a bug in our tracking system. Examples of causes include: bad code, missed code, incomplete requirements, missing requirements, incomplete testing etc. I was not a proponent of this as I could see it leading to unintended behaviors from the dev team. To date this field has been hidden from team members and not actively used.
Now we are in the middle of a project where we have a larger than normal number of bugs and this type of information would be good to have in order to better understand where we went wrong and where we can make improvements in the future (or adjustments now). In order to get good data on the cause of the bugs we would need to open this field up for input by dev and qa team members and I'm worried that will drive bad behaviors. For example people may not want to fix a defect they didn't create because they'll feel it reflects poorly on their performance, or people might waste time arguing over the classification of a defect for similar reasons.
Has anyone found a mechanism to do this type of tracking without driving bad behaviors? Is it possible to expect useful data from team members if we explain to the team the reasoning behind the data (not to drive individual performance metrics, but project success metrics)? Is there another, better way to do this type of thing (a more ad-hoc post mortem or open discussion on the issues perhaps)?
A lot of version control packages have things like svn blame. This is not a direct metric for tracking a bug, but it can tell you who checked in changes to a release that has a major bug in it.
There are also programs like http://www.bugzilla.org/ that help track things over time.
But as far as really digging into why bugs exist, yes, it's definitely worth looking into, though I can't give a standard metric for collecting that information. There are a number of reasons why a system might be very buggy:
Poorly written specs
Rushed timelines
Low-skill programming
Bad morale
Lack of beta or QA testing
Lack of preparing software so that it is even feasible to beta or QA test
Poor ratio of time spent cleaning up bugs vs getting new functionality out
Poor ratio of time spent making bug-free enhancements vs getting functionality out
An exceedingly complex system that is easy to break
A changing environment that is outside the code base, such as the machine administration
Blame for mistakes affecting programmer compensation or promotion
That's just to name a few... If too many bugs is a big problem, then management and lead programmers and any other stake-holders in the whole process need to sit down and discuss the issue.
High bug rates can be a symptom of a schedule which is too rushed or inflexible. Switching to a zero defect approach may help. Fix all bugs before working on new code.
Assigning reasons is a good technique to see if you have a problem area. Typical metrics I have seen and encountered are an even split between:
Specification errors (missing, incorrect, etc.)
Application bugs (incorrect code, missing code, bad data, etc.)
Incorrect tests / no error (generally incorrect expectations, or specifications not yet implemented)
Reviewing and verifying the defect causes can be useful.

Can compiler optimization introduce bugs?

Today I had a discussion with a friend of mine and we debated for a couple of hours about "compiler optimization".
I defended the point that sometimes, a compiler optimization might introduce bugs or at least, undesired behavior.
My friend totally disagreed, saying that "compilers are built by smart people and do smart things" and thus, can never go wrong.
He didn't convince me at all, but I have to admit I lack real-life examples to strengthen my point.
Who is right here? If I am, do you have any real-life example where a compiler optimization produced a bug in the resulting software? If I'm mistaken, should I stop programming and learn fishing instead?
Compiler optimizations can introduce bugs or undesirable behaviour. That's why you can turn them off.
One example: a compiler can optimize the read/write access to a memory location, doing things like eliminating duplicate reads or duplicate writes, or re-ordering certain operations. If the memory location in question is only used by a single thread and is actually memory, that may be ok. But if the memory location is a hardware device IO register, then re-ordering or eliminating writes may be completely wrong. In this situation you normally have to write code knowing that the compiler might "optimize" it, and thus knowing that the naive approach doesn't work.
Update: As Adam Robinson pointed out in a comment, the scenario I describe above is more of a programming error than an optimizer error. But the point I was trying to illustrate is that a program which is otherwise correct, combined with an optimization which otherwise works properly, can introduce bugs when the two are put together. In some cases the language specification says "You must do things this way because these kinds of optimizations may occur and your program will fail", in which case it's a bug in the code. But sometimes a compiler has a (usually optional) optimization feature that can generate incorrect code because the compiler is trying too hard to optimize the code or can't detect that the optimization is inappropriate. In this case the programmer must know when it is safe to turn on the optimization in question.
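Here is a sketch of the hardware-register situation described above (the address and register name are invented). Without the volatile qualifier the compiler may legitimately merge or reorder the two stores, which is fine for ordinary memory but breaks the device protocol:

#include <stdint.h>

/* Hypothetical memory-mapped device register at a made-up address. */
#define DEVICE_CTRL (*(volatile uint32_t *)0x40001000u)

void reset_device(void) {
    DEVICE_CTRL = 0x1;   /* without volatile, the compiler could drop this store  */
    DEVICE_CTRL = 0x0;   /* and keep only the last one, or reorder the two; both  */
                         /* are correct optimizations for plain memory, but wrong */
                         /* for a device register                                 */
}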
Another example:
The linux kernel had a bug where a potentially NULL pointer was being dereferenced before a test for that pointer being null. However, in some cases it was possible to map memory to address zero, thus allowing the dereferencing to succeed. The compiler, upon noticing that the pointer was dereferenced, assumed that it couldn't be NULL, then removed the NULL test later and all the code in that branch. This introduced a security vulnerability into the code, as the function would proceed to use an invalid pointer containing attacker-supplied data. For cases where the pointer was legitimately null and the memory wasn't mapped to address zero, the kernel would still OOPS as before. So prior to optimization the code contained one bug; after it contained two, and one of them allowed a local root exploit.
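The pattern at the heart of that kernel bug looks roughly like this (a simplified sketch, not the actual kernel source):

struct sock_like { int flags; };     /* stand-in type for illustration */

int poll_device(struct sock_like *sk) {
    int flags = sk->flags;           /* dereference happens first...             */
    if (sk == NULL)                  /* ...so the compiler concludes sk != NULL  */
        return -1;                   /* and may delete this test and its branch, */
                                     /* turning a crash into an exploitable hole */
                                     /* when page 0 can be mapped                */
    return flags;
}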
CERT has a presentation called "Dangerous Optimizations and the Loss of Causality" by Robert C. Seacord which lists a lot of optimizations that introduce (or expose) bugs in programs. It discusses the various kinds of optimizations that are possible, from "doing what the hardware does" to "trap all possible undefined behaviour" to "do anything that's not disallowed".
Some examples of code that's perfectly fine until an aggressively-optimizing compiler gets its hands on it:
Checking for overflow
// fails because the overflow test gets removed
if (ptr + len < ptr || ptr + len > max) return EINVAL;
Using overflow arithmetic at all:
// The compiler optimizes this to an infinite loop
for (i = 1; i > 0; i += i) ++j;
Clearing memory of sensitive information:
// the compiler can remove these "useless writes"
memset(password_buffer, 0, sizeof(password_buffer));
The problem here is that compilers have, for decades, been less aggressive in optimization, and so generations of C programmers learn and understand things like fixed-size two's complement addition and how it overflows. Then the C language standard is amended by compiler developers, and the subtle rules change, despite the hardware not changing. The C language spec is a contract between the developers and compilers, but the terms of the agreement are subject to change over time, and not everyone understands every detail, or agrees that the details are even sensible.
This is why most compilers offer flags to turn off (or turn on) optimizations. Is your program written with the understanding that integers might overflow? Then you should turn off overflow optimizations, because they can introduce bugs. Does your program strictly avoid aliasing pointers? Then you can turn on the optimizations that assume pointers are never aliased. Does your program try to clear memory to avoid leaking information? Oh, in that case you're out of luck: you either need to turn off dead-code-removal or you need to know, ahead of time, that your compiler is going to eliminate your "dead" code, and use some work-around for it.
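For completeness, here are sketches in C of the usual work-arounds for two of the examples above: force the sensitive-data writes through a volatile lvalue so the compiler cannot prove them dead (platform-specific helpers such as explicit_bzero, where available, do the same job), and express the range check in a form that never overflows in the first place:

#include <stddef.h>

/* Clearing sensitive data: writes through a volatile lvalue count as side
 * effects, so they cannot be removed as "dead" stores. */
static void secure_clear(void *p, size_t n) {
    volatile unsigned char *vp = p;
    while (n--)
        *vp++ = 0;
}

/* Overflow-free version of the earlier range check, assuming ptr and max
 * point into the same buffer and ptr <= max. */
static int range_ok(const char *ptr, size_t len, const char *max) {
    return len <= (size_t)(max - ptr);
}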
When a bug goes away by disabling optimizations, most of the time it's still your fault
I am responsible for a commercial app, written mostly in C++ - started with VC5, ported to VC6 early, now successfully ported to VC2008. It grew to over 1 million lines in the last 10 years.
In that time I could confirm a single code-generation bug that occurred when aggressive optimizations were enabled.
So why am I complaining? Because, over the same period, there were dozens of bugs that made me doubt the compiler - but they turned out to be due to my insufficient understanding of the C++ standard. The standard makes room for optimizations the compiler may or may not make use of.
Over the years on different forums, I've seen many posts blaming the compiler, ultimately turning out to be bugs in the original code. No doubt many of them obscure bugs that need a detailed understanding of concepts used in the standard, but source code bugs nonetheless.
Why I reply so late: stop blaming the compiler before you have confirmed it's actually the compiler's fault.
Compiler (and runtime) optimization can certainly introduce undesired behaviour - but it at least should only happen if you're relying on unspecified behaviour (or indeed making incorrect assumptions about well-specified behaviour).
Now beyond that, of course compilers can have bugs in them. Some of those may be around optimisations, and the implications could be very subtle - indeed they're likely to be, as obvious bugs are more likely to be fixed.
Assuming you include JITs as compilers, I've seen bugs in released versions of both the .NET JIT and the Hotspot JVM (I don't have details at the moment, unfortunately) which were reproducible in particularly odd situations. Whether they were due to particular optimisations or not, I don't know.
To combine the other posts:
Compilers do occasionally have bugs in their code, like most software. The "smart people" argument is completely irrelevant to this, as NASA satellites and other apps built by smart people also have bugs. The code that performs optimization is different from the code that doesn't, so if the bug happens to be in the optimizer, then your optimized code may indeed contain errors while your non-optimized code will not.
As Mr. Shiny and New pointed out, it's possible for code that is naive with regard to concurrency and/or timing issues to run satisfactorily without optimization yet fail with optimization as this may change the timing of execution. You could blame such a problem on the source code, but if it will only manifest when optimized, some people might blame optimization.
Just one example: a few days ago, someone discovered that gcc 4.5 with the option -foptimize-sibling-calls (which is implied by -O2) produces an Emacs executable that segfaults on startup.
This has apparently been fixed since.
I've never heard of or used a compiler whose directives could not alter the behaviour of a program. Generally this is a good thing, but it does require you to read the manual.
AND I had a recent situation where a compiler directive 'removed' a bug. Of course, the bug is really still there but I have a temporary workaround until I fix the program properly.
Yes. A good example is the double-checked locking pattern. In C++ there is no way to safely implement double-checked locking because the compiler can re-order instructions in ways that make sense in a single-threaded system but not in a multi-threaded one. A full discussion can be found at http://www.aristeia.com/Papers/DDJ_Jul_Aug_2004_revised.pdf
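For reference, the broken pattern looks roughly like this in pre-C11 C with pthreads (a hedged sketch, not a recommended implementation). The problem is that the first, unsynchronized read of instance can observe the pointer before the object's fields have been written, because the compiler and CPU are free to reorder those stores:

#include <pthread.h>
#include <stdlib.h>

typedef struct { int value; } Singleton;

static Singleton *instance = NULL;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

Singleton *get_instance(void) {
    if (instance == NULL) {                  /* first check: unsynchronized read */
        pthread_mutex_lock(&lock);
        if (instance == NULL) {              /* second check: under the lock */
            Singleton *tmp = malloc(sizeof *tmp);
            if (tmp != NULL) {
                tmp->value = 42;             /* this store and the publication of  */
                instance = tmp;              /* 'instance' may be reordered, so a  */
            }                                /* racing thread can see a non-NULL   */
        }                                    /* pointer to uninitialized memory    */
        pthread_mutex_unlock(&lock);
    }
    return instance;
}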
Is it likely? Not in a major product, but it's certainly possible. Compiler optimizations produce generated code; no matter where code comes from (whether you write it or something generates it), it can contain errors.
I encountered this a few times with a newer compiler building old code. The old code would work but relied on undefined behavior in some cases, like improperly defined / cast operator overload. It would work in VS2003 or VS2005 debug build, but in release it would crash.
Opening up the assembly generated it was clear that the compiler had just removed 80% of the functionality of the function in question. Rewriting the code to not use undefined behavior cleared it up.
More obvious example: VS2008 vs GCC
Declared:
void foo( const type & tp );
Called:
foo( foo2() );
where foo2() returns an object of class type;
Tends to crash in GCC because the object isn't allocated on the stack in this case, but VS does some optimization to get around this and it will probably work.
Aliasing can cause problems with certain optimizations, which is why compilers have an option to disable those optimizations. From Wikipedia:
To enable such optimizations in a predictable manner, the ISO standard for the C programming language (including its newer C99 edition) specifies that it is illegal (with some exceptions) for pointers of different types to reference the same memory location. This rule, known as "strict aliasing", allows impressive increases in performance[citation needed], but has been known to break some otherwise valid code. Several software projects intentionally violate this portion of the C99 standard. For example, Python 2.x did so to implement reference counting,[1] and required changes to the basic object structs in Python 3 to enable this optimisation. The Linux kernel does this because strict aliasing causes problems with optimization of inlined code.[2] In such cases, when compiled with gcc, the option -fno-strict-aliasing is invoked to prevent unwanted or invalid optimizations that could produce incorrect code.
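A typical example of the kind of code strict aliasing breaks is type-punning a float through a uint32_t pointer. Under optimization the compiler may assume the two pointer types cannot refer to the same object and reorder or cache accesses, while -fno-strict-aliasing (or rewriting the pun with memcpy) keeps it well behaved. A small sketch:

#include <stdint.h>
#include <string.h>

/* Undefined behaviour: reads a float object through an incompatible pointer type. */
uint32_t float_bits_punned(float f) {
    return *(uint32_t *)&f;          /* may misbehave under -O2 with strict aliasing */
}

/* Well-defined alternative: copy the representation (compilers optimize this well). */
uint32_t float_bits_safe(float f) {
    uint32_t u;
    memcpy(&u, &f, sizeof u);
    return u;
}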
Yes, compiler optimizations can be dangerous. Usually hard real-time software projects forbid optimizations for this very reason. Anyway, do you know of any software with no bugs?
Aggressive optimizations may cache values or even make strange assumptions about your variables. The problem is not only the stability of your code; they can also fool your debugger. I have seen a debugger fail several times to show the memory contents because some optimization kept a variable's value in a register of the microcontroller.
The very same thing can happen to your code. The optimization puts a variable into a register and does not write back to the variable until it has finished. Now imagine how different things can be if your code has pointers to variables on its stack and several threads.
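A classic manifestation of what's described above, sketched in C: a flag shared between threads gets cached in a register, so the loop never observes the other thread's write once the optimizer hoists the load:

static int done = 0;     /* shared with another thread, no synchronization */

void wait_for_completion(void) {
    /* The optimizer may load 'done' once, keep it in a register, and turn
     * this into an infinite loop; an atomic access (or volatile, for the
     * narrow device-register case) is needed to force a fresh read. */
    while (!done) {
        /* ... poll or do other work ... */
    }
}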
It's theoretically possible, sure. But if you don't trust the tools to do what they are supposed to do, why use them? But right away, anyone arguing from the position of
"compilers are built by smart people
and do smart things" and thus, can
never go wrong.
is making a foolish argument.
So, until you have reason to believe that a compiler is doing so, why posture about it?
I certainly agree that it's silly to say that because compilers are written by "smart people" they are therefore infallible. Smart people designed the Hindenburg and the Tacoma Narrows Bridge, too. Even if it's true that compiler writers are among the smartest programmers out there, it's also true that compilers are among the most complex programs out there. Of course they have bugs.
On the other hand, experience tells us that the reliability of commercial compilers is very high. Many, many times someone has told me that the reason his program doesn't work MUST be a bug in the compiler, because he has checked it very carefully and is sure it is 100% correct ... and then we find that in fact the program has the error, not the compiler. I'm trying to think of times that I've personally run across something that I was truly sure was an error in the compiler, and I can only recall one example.
So in general: Trust your compiler. But are they ever wrong? Sure.
It can happen. It has even affected Linux.
As I recall, early Delphi 1 had a bug where the results of Min and Max were reversed. There was also an obscure bug with some floating point values only when the floating point value was used within a dll. Admittedly, it has been more than a decade, so my memory may be a bit fuzzy.
I have had a problem in .NET 3.5: if you build with optimization and add another variable to a method that is named similarly to an existing variable of the same type in the same scope, then one of the two (the new or the old variable) will not be valid at runtime, and all references to the invalid variable are replaced with references to the other.
So, for example, if I have abcd of MyCustomClass type and I have abdc of MyCustomClass type and I set abcd.a=5 and abdc.a=7 then both variables will have property a=7. To fix the issue both variables should be removed, the program compiled (hopefully without errors) then they should be re-added.
I think I have run into this problem a few times with .NET 4.0 and C# when doing Silverlight applications also. At my last job we ran into the problem quite often in C++. It might have been because the compilations took 15 minutes so we would only build the libraries we needed, but sometimes the optimized code was exactly the same as the previous build even though new code had been added and no build errors had been reported.
Yes, code optimizers are built by smart people. They are also very complicated so having bugs is common. I suggest fully testing any optimized release of a large product. Usually limited use products are not worth a full release, but they should still be generally tested to make sure they perform their common tasks correctly.
Compiler optimization can reveal (or activate) dormant (or hidden) bugs in your code. There may be a bug in your C++ code that you don't know of, that you just don't see. In that case, it is a hidden or dormant bug, because that branch of the code is not executed (or not executed often enough).
The likelihood of a bug in your code is much higher (thousands of times higher) than the likelihood of a bug in the compiler's code, because compilers are tested extensively: by TDD and, in practice, by all the people who have used them since their release. So it is very unlikely that a bug is discovered by you and not by the hundreds of thousands of other people using the compiler.
A dormant or hidden bug is just a bug that has not revealed itself to the programmer yet. People who can claim that their C++ code has no (hidden) bugs are very rare. It requires C++ knowledge (very few can claim that) and extensive testing of the code. It is not just about the programmer, but about the code itself (the style of development). Being bug-prone is in the character of the code (how rigorously it is tested) and/or the programmer (how disciplined they are in testing and how well they know C++ and programming).
Security and concurrency bugs: this is even worse if we include concurrency and security issues as bugs. But after all, these are bugs. Writing code that is bug-free in terms of concurrency and security in the first place is almost impossible. That's why there is almost always already a bug in the code, which can then be revealed (or remain hidden) under compiler optimization.
More, and more aggressive, optimizations can be enabled if the program you compile has a good test suite. Then it is possible to run that suite and be somewhat more sure the program operates correctly. Also, you can prepare your own tests that closely match what you plan to do in production.
It is also true that any large program may have (and probably does have) some bugs independently of which switches you use to compile it.
I work on a large engineering application, and every now and then we see release-only crashes and other problems reported by clients. Our code has 37 files (out of around 6000) where we have this at the top of the file, to turn off optimization to fix such crashes:
#pragma optimize( "", off)
(We use Microsoft Visual C++ native, 2015, but it is true for just about any compiler, except maybe Intel Fortran 2016 update 2, where we have not yet turned off any optimizations.)
If you search through the Microsoft Visual Studio feedback site you can find some optimization bugs there as well. We occasionally log some of ours (if you can reproduce it easily enough with a small section of code and you are willing to take the time) and they do get fixed, but sadly others get introduced again. (smiles)
Compilers are programs written by people, and any big program has bugs, trust me on that. The compiler optimization options most certainly have bugs, and turning on optimization can certainly introduce bugs into your program.
Everything that you can possibly imagine doing with or to a program will introduce bugs.
Because of exhaustive testing and the relative simplicity of actual C++ code (C++ has under 100 keywords/operators), compiler bugs are relatively rare. Bad programming style is often the only thing that encounters them. And usually the compiler will crash or produce an internal compiler error instead. The only exception to this rule is GCC. GCC, especially older versions, had a lot of experimental optimizations enabled at -O3 and sometimes even at the other -O levels. GCC also targets so many backends that this leaves more room for bugs in its intermediate representation.
I had a problem with .net 4 yesterday with something that looks like...
double x=0.4;
if(x<0.5) { below5(); } else { above5(); }
And it would call above5(); But if I actually use x somewhere, it would call below5();
double x=0.4;
if(x<0.5) { below5(); } else { System.Console.Write(x); above5(); }
Not the exact same code but similar.