Debug vs release modes on the iPhone - iphone-sdk-3.0

Can anybody explain debug and release modes in the iPhone SDK? What is their importance and how are they distinguished?

In debug mode, the compiler keeps debugging information for use with the debugger. It also doesn't optimize the code, since optimization can make debugging tricky.
Release mode strips out the debugging symbols and turns on optimization. It's generally used when "releasing" the product, since you want it to run as fast as possible.
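Concretely, in Xcode the two modes are just build configurations that differ in their build settings. A rough sketch of the usual defaults (the setting names are real Xcode settings; the values shown are typical, not mandatory):

```
// Debug configuration (xcconfig syntax)
GCC_OPTIMIZATION_LEVEL = 0                  // -O0: no optimization, easy stepping
GCC_GENERATE_DEBUGGING_SYMBOLS = YES
COPY_PHASE_STRIP = NO                       // keep symbols in the binary

// Release configuration
GCC_OPTIMIZATION_LEVEL = s                  // -Os: optimize
DEBUG_INFORMATION_FORMAT = dwarf-with-dsym  // symbols archived separately
COPY_PHASE_STRIP = YES                      // strip symbols from the shipped binary
```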

Related

Finding performance-related release-only bugs in Cocoa

I am currently developing a project and I'm working in debug during development.
Typically, I only switch to release mode just before releasing the next update, to check that everything works as expected.
During the last release tests, I found a bug that just appears in release mode.
As soon as I set the optimization level to 1 or higher (I'm using LLVM 4.2), the bug starts to appear.
I don't have any clue what the error could be related to.
Does anyone have an idea what I could do to find the cause of the error?
Instruments, maybe? I didn't find a mode that seemed to fit.
It's a performance-related error and I can't seem to find it with breakpoints.
My first guess is that the main thread is fully loaded, so that not all events can be processed in time.
If that was it, why would that not happen in debug mode?
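One rough way to test that hypothesis is a main-thread "watchdog" that measures how long the main queue takes to service a trivial block; everything below (names, interval, threshold) is illustrative and not taken from the question:

```objc
#import <Foundation/Foundation.h>
#include <dispatch/dispatch.h>

// Illustrative sketch: from a background queue, repeatedly dispatch a no-op
// block onto the main queue and measure how long it waits before running.
// If the delay grows, the main run loop is too busy to service events in time.
static void StartMainThreadWatchdog(void) {
    dispatch_queue_t background = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0);
    dispatch_async(background, ^{
        while (YES) {
            CFAbsoluteTime enqueued = CFAbsoluteTimeGetCurrent();
            dispatch_sync(dispatch_get_main_queue(), ^{
                CFAbsoluteTime delay = CFAbsoluteTimeGetCurrent() - enqueued;
                if (delay > 0.1) {  // 100 ms threshold is arbitrary; tune it
                    NSLog(@"Main thread blocked for %.0f ms", delay * 1000.0);
                }
            });
            [NSThread sleepForTimeInterval:0.5];
        }
    });
}
```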

just before release - how to check memory usage?

Our app is good to go and everything seems to work just fine. We have tried to manage the memory as much as we can and we have no crashes at all.
Now before release, I want to check if there are leaks, or some problems that may cause my app to be rejected by Apple.
What's Apple's policy on memory leaks? Are even small ones not allowed? If some are allowed, then what's the limit?
If simply not crashing is not enough, what software/tool should I use to check memory management and leaks, so that I can be sure that good results mean my app will be approved by Apple?
Is there a guide for any of these tools?
Is checking my app in all iOS versions in the iOS Simulator enough? I have only 1 iPhone 4 :)
What you can do:
Run "Analyze" (MenuBar -> Product -> Analyze or SHIFT+CMD+B)
This checks your code for possible leaks and dead stores (see the example after these steps).
Run "Profile" (MenuBar -> Product -> Profile or SHIFT+I)
This runs Instruments which allows you to track your allocations and possible leaks at runtime.
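As an illustration of the kind of problem the Analyze step reports, here is a small made-up method that the static analyzer flags as a potential leak under manual retain/release (the class and method names are invented):

```objc
#import <Foundation/Foundation.h>

@interface Item : NSObject
- (NSString *)formattedTitle;
@end

@implementation Item
- (NSString *)formattedTitle {
    // alloc/init gives us a +1 retain count, but the object is never released
    // and the method name doesn't imply ownership transfer, so Analyze
    // reports "Potential leak of an object" on this path.
    NSString *title = [[NSString alloc] initWithFormat:@"Item %d", 42];
    return title;
}
@end
```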
I don't think there is a specific policy about leaks, but a program that leaks will eventually crash, and Apple does reject apps due to crashes.
Otherwise I do agree with MatzeLoCal - run Analyze, and fix everything - and run a lot of profiling if you suspect there to be any issues.
In addition to running Analyze...
There is a tool bundled with Xcode called Instruments that allows you to search specifically for memory leaks in your application. Choose the Leaks template when Instruments opens, then exercise your app while recording to surface any memory leaks that may be hiding in it.
Here is the official Apple documentation for using Instruments: https://developer.apple.com/library/ios/documentation/DeveloperTools/Conceptual/InstrumentsUserGuide/FindingLeakedMemory.html

Help diagnosing crash in Cocoa framework - possible memory leak?

I'm currently migrating the Fragaria framework from requiring garbage collection (GC-only) to merely supporting it. After the work was done (or what I thought had to be done to make it work), I was able to run the examples that come with the framework without any problems, and Instruments didn't show any major memory leaks.
I included non-GC Fragaria in my non-GC application and it crashes as soon as I place the cursor on it. To be honest the usage pattern is different from the examples as I'm embedding it in an instance of NSViewController instead of NSDocument.
Can you give me some tips on how to debug this? I'm a bit lost on where to proceed now.
First thing to do is Build and Analyze the code, then fix any problems it finds.
Next, try running with Zombie detection enabled (google NSZombie; see the note after these steps).
Finally, each crash's stacktrace should give you a pretty good idea where things have gone off the rails.
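For reference, zombie detection is switched on with an environment variable rather than in code. In Xcode, edit the executable/scheme and add the following to the environment variables (the exact menu path varies by Xcode version):

```
NSZombieEnabled = YES
```

With zombies enabled, a message sent to an already-deallocated object logs something like "message sent to deallocated instance 0x..." instead of crashing unpredictably, which points straight at the over-released object.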

Why am I having memory leaks using Instruments on the device but not on the simulator

I am getting close to finishing the release of my application and am trying to use Instruments to fix any memory leaks.
How come I can spot a memory leak when using Instruments with my device, but not when I am using the iPhone Simulator? I understand that this is a high-level question, but I don't think posting any code would help anyway (quite a bit of code...).
And is it possible to get Instruments to point to the source code where it thinks the leak is? I can do that when using the simulator, but it doesn't seem to work when profiling on the device (objects are represented by their address, I assume, whereas when running against the simulator it knows what object it is; a setup issue?)
Update: Could it have something to do with OS X having automatic garbage collection while iOS doesn't?
Trust only the device. That's what your user will use to run your application.
Don't trust the simulator.
As a demonstration of this, I just intentionally added a leak to a project. The leak was not detected while in the simulator, but showed up as expected on the device.
The simulator is just that: a simulator. It can be useful to work faster, but is never a replacement of the device.
Once Instruments shows you a leaked object, you can double-click on it. It will show the part of your code responsible for the leak. This works for both the simulator and the device.
When you compile for the device, make sure you are in debug mode (and that the settings for this mode keep all your symbols).
Some more tips that you might find useful:
For a more fluid session, disable the "Automatic Leaks Checking", and manually press the "Check for Leaks Now" button when appropriate.
The "Build and Analyse" command will do a fantastic job to help you find leaks. It's not (totally) magical, so it won't find all leaks. For example, iVars leaked won't be identified. But for the scope of a method, it's just awesome.
I highly recommend to activate the "Run Static Analyser" flag in your build settings (or only for the Release mode if you have a slow to compile machine).
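In build-settings terms, the advice above corresponds roughly to the following (real Xcode setting names, xcconfig syntax; the values are just the usual suggestions):

```
COPY_PHASE_STRIP = NO                   // keep symbols so Instruments can name objects
GCC_GENERATE_DEBUGGING_SYMBOLS = YES
RUN_CLANG_STATIC_ANALYZER = YES         // the "Run Static Analyzer" flag
```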
If you want more info about how to use Instruments to find leaks, read these Apple docs: Instruments User Guide: Built-in Instruments and Instruments User Guide: Viewing and Analysing Trace Data > Looking for Memory Leaks
You can also watch the video of the WWDC related sessions.
If you still don't understand where your leak comes from, it's time to (re)read the Memory Management Programming Guide.
Thank you for wanting to ship a leak-free application. With iOS 4, it's now more important than ever.
If you haven't already, take a look at the handy "Build and Analyze" option in the Build menu. It runs the static analyzer, which generally does a great job. If nothing turns up with that, you could spend some time reviewing the WWDC session videos on Instruments.
There is no substitute for profiling on hardware, and with the debugger and Instruments connected you can get everything you would in a simulator context.

Separate 'debug' and 'release' builds?

I think it's better to release the version of the software which your developers actually tested; I therefore tend to delete the 'debug' target from the project/makefile, so that there's only one version that can be built (and tested, and debugged, and released).
For a similar reason, I don't use 'assertions' (see also Are assertions always bad? ...).
One person there argued that the reason for a 'debug' version is that it's easier to debug: but I counter-argued that you may eventually want to support and debug whatever it is you released, and so you need to build a release which you can if necessary debug ... this may mean enabling debug symbols, and disabling some optimizations, even in the 'release' build.
Someone else said that "this is such a bad idea"; it's a policy I evolved some years ago, having been burned by:
Some developers' testing their debug but not release versions
Some developers' writing bugs which show up only in the release version
The company's releasing the release version after inadequate testing (is it ever entirely adequate?)
Being called on to debug the release version
Since then I've seen more than one other development shop follow this practice (i.e. not have separate debug and release builds).
What's your policy?
Having separate debug and release builds is a good idea, because it does make development easier.
But debug builds should be for development only, not for testing. You test release builds only. And you don't use developers to test those builds, you use testers.
It's a simple policy that gives the best of both worlds, IMO.
Edit: In response to a comment, I think it's obvious that debug and release builds (can) generate different code. Think "-DDEBUG" vs. "-DNDEBUG", and "#if defined(DEBUG)", etc.
So it's vital that you test the code that you end up shipping. If you do generate different code in debug and release builds, that means testing twice - regardless of whether or not it's tested by the same person.
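A minimal sketch of that difference, assuming the common (but not universal) setup where the Debug configuration defines DEBUG and the Release configuration does not; the macro name MYAppLog is invented for illustration:

```objc
#import <Foundation/Foundation.h>

// The two configurations literally compile different code, which is exactly
// why the build you ship needs its own round of testing.
#if defined(DEBUG)
    #define MYAppLog(fmt, ...) NSLog((@"%s " fmt), __PRETTY_FUNCTION__, ##__VA_ARGS__)
#else
    #define MYAppLog(fmt, ...) do {} while (0)   // compiled out of release builds
#endif
```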
Debug symbols are not that big an issue, however. Always build with debugging symbols, keep a copy of the unstripped binary, but release a stripped binary. As long as you tag each binary with a build number somehow, you should always be able to identify which unstripped binary corresponds to the stripped binary that you have to debug...
How to strip binaries and load symbols in your debugger from an external source is platform-dependent.
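On the Apple toolchain, for instance, that workflow might look roughly like this (the commands are real; the file names and addresses are placeholders):

```
dsymutil MyApp.app/MyApp -o MyApp.dSYM   # archive the full symbols separately
strip -S MyApp.app/MyApp                 # ship a stripped binary
# Later, map a crash address from the stripped build back to source:
atos -o MyApp.dSYM/Contents/Resources/DWARF/MyApp -l 0x1000 0x2f3c
```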
This might be minor, but it adds to what others have said here. One of the advantages of having QA test release builds is that, over time, the built-in debugging and logging capabilities of your software will improve, driven by developers who need to figure out why things are going wrong in QA.
The more the developers need to debug release builds, the better tools you'll have later when customers start having issues.
Of course, no reason for developers to work on release builds as part of the development cycle.
Also, I don't know any software company that has long enough cycles to afford the overhead of switching QA from debug to release builds halfway through a version's testing period. All too often, getting even one full QA cycle done is something that happens pretty rarely as it is.
Our policy is to have developers work on Debug builds, but EVERYONE else (QA, BAs, sales etc) runs the release version.
Yesterday I had to fix a bug that only showed up in the release build; it was obvious what was happening simply BECAUSE it only showed up in release.
It's the first one here in this shop, and I've been here 18 months or so.
Where things get hairy is when the Release build does different things than the debug build - yes, I have been to Hell and seen this in some very old, very ropy production code.
I see no reason why not to have both if the only difference between the configurations are debug symbols and optimisations.
"so you need to build a release which you can if necessary debug ... this may mean enabling debug symbols, and disabling some optimizations, even in the 'release' build."
Ummm... it sounds like you're doing a debug build to me... right?
The part where you went wrong is this statement:
"I think it's better to release the version of the software which your developers actually tested"
Developers don't test code. Tests test code.
Your unit tests should test ALL build configurations. Do not make your developers work with one hand tied behind their back - let them use all the debugging tools they have at their disposal. A Debug build is one of these.
Regarding asserts: the use of assertions greatly depends on whether or not you program by contract. If you do, then assertions merely check the contract in a debug build.
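A minimal sketch of that idea in Cocoa (the class and method names are invented; NSAssert and NSParameterAssert compile away when NS_BLOCK_ASSERTIONS is defined, which is the typical release setup):

```objc
#import <Foundation/Foundation.h>

@interface Bank : NSObject
- (void)transferAmount:(NSDecimalNumber *)amount toAccountNumber:(NSString *)accountNumber;
@end

@implementation Bank
- (void)transferAmount:(NSDecimalNumber *)amount toAccountNumber:(NSString *)accountNumber {
    // Contract checks: verified in debug builds, compiled out of any build
    // that defines NS_BLOCK_ASSERTIONS.
    NSParameterAssert(amount != nil);
    NSParameterAssert([accountNumber length] > 0);
    NSAssert([amount compare:[NSDecimalNumber zero]] != NSOrderedAscending,
             @"amount must be non-negative");
    // ... perform the transfer ...
}
@end
```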
As per my answer in the linked thread, we also use the same build for debug and release, for very similar reasons. The 10%-20% performance gains from the optimiser tend to be very minor when compared to manual optimisations at algorithm level. A single build removes many potential bugs. Specifically:
Uninitialised variables and small buffer overflows may end up with very different results in debug and optimised release builds.
Even with the symbolic information available, debugging an optimised release can be difficult as the object doesn't match the source, e.g. variables may have been optimised out and code may have been re-arranged. Thus bugs reported in tested release builds can be more difficult, and hence time-consuming, to track down.
Having compared unoptimised and optimised builds under automated regression tests, the performance gains provided by the optimisation don't provide enough extra value to have two builds in my case. It may be worth noting that the software I develop is very CPU hungry (e.g. creating and manipulating large surface models).
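As a concrete (entirely made-up) illustration of the first point, here is a classic bug that often "works" in an unoptimised build and misbehaves under optimisation:

```objc
#import <Foundation/Foundation.h>

NSInteger SumOfNonNegativeValues(NSArray *numbers) {
    NSInteger total;                     // BUG: never initialised to 0
    for (NSNumber *value in numbers) {
        if ([value integerValue] >= 0) {
            total += [value integerValue];
        }
    }
    // In an unoptimised build the stack slot may happen to contain 0, so this
    // appears to work; an optimised build may keep `total` in a register
    // holding garbage, or rearrange things so the latent bug finally shows.
    return total;
}
```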
When developing with Java, I hate non-debug versions. When an exception is thrown, you get no line information, which makes it hard or even impossible to track bugs down. Also, the runtime difference between debug and non-debug is around 5% with Java 5 or later, so this is really no issue, and with today's hard disks, size doesn't matter anymore.
On the plus side of using debug versions:
Stack traces contain all the information you need
Variables can be examined
If you have a problem in production, you can simply attach to the running process without having to stop the server first to install a debug version.
You won't get caught by clever optimization bugs
The build is more simple (just one artifact)
Developers work with debug builds, QA and everyone else uses the release version, which we call "production". The main advantage to this is that in the debug build, we can add lots of extra code and assertions. Some objects contain extra pieces of information that have no use except when viewing code in the debugger. Some objects validate themselves periodically to make sure that all the state information is consistent. These things make the debug version much slower, but they have helped us find no end of bugs that would have been hell to find in the production build.
As I said, all of our QA and performance testing uses production builds, and we do occasionally run into problems that show up in production but not in debug. But they're relatively rare, and as a developer, the advantages of debugging a debug build rather than a production build far outweigh that problem.
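A sketch of the kind of debug-only self-validation described above (the class and its invariant are invented for illustration):

```objc
#import <Foundation/Foundation.h>

@interface OrderBook : NSObject {
    NSMutableArray *_openOrders;
    NSMutableArray *_closedOrders;
    NSUInteger _totalOrderCount;
}
- (void)validate;
@end

@implementation OrderBook
- (void)validate {
#ifdef DEBUG
    // Periodic consistency check: expensive, so it only exists in debug builds.
    NSAssert([_openOrders count] + [_closedOrders count] == _totalOrderCount,
             @"OrderBook state is inconsistent");
#endif
}
@end
```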
I think it depends on the project size and what type of build system and testing that you are using.
If you have an automated build system in place, and it's simple to run unit and functional tests on a given build, then you should never have any problems with multiple build types.
I've always subscribed to the "Ship what you debug, so you can debug what you ship" approach, for all the reasons you list in your question.
In my opinion this discussion is missing a very important point:
It really depends upon what kind of project it is!
If you create a native (C/C++) project you will in effect be forced to create debug builds, simply because compiler optimizations can make debugging near impossible in some cases.
If you create web applications you might rather wish to simply have one build (although "build" is rather misleading for some web applications) that can enable logging features during runtime.
Although a native C++ project and a PHP web application are obviously not the only kinds of project that exist, I hope my point got across.
P.S.: When developing in C#, you run into a borderline case: although using a debug build disables compiler optimizations, in my experience you will not run into nearly as many differences as with C++.
Here we develop in debug mode and do all unit testing in release mode. We are a small shop with just a few (under 12) applications to support, ranging across Classic ASP, ASP.NET, VB.NET, and C#.
We also have a dedicated person to handle all testing; problems that need debugging are thrown back to the developers.
We always build both, never even considered not doing so. Enabling debug options increases your code size and slows performance, possibly not an issue with your type of software when testing but what if the customer is running your code plus 5 other apps...
The issues with testing can be sorted out by using automated testing, so your release build can be effortlessly tested when you think you're ready to release. The failure of your developers or company to properly test release builds is not a failure in the idea of release and debug builds, but in your developers and/or company.
On your last point, I have never been called upon to debug a release build, just to fix it...
It's a tradeoff. Given that CPU cycles are cheap and getting cheaper while human cycles remain expensive, it makes a lot of sense to maintain only a single version of a large, complex program -- the debug(gable) version.
Always using assertions is a safer policy than never using them. If you produce separate debug and release versions, re-enable whatever #defined symbols you need to guarantee that assertions are enabled in the release version as well.
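For C-style assert(), for example, that means making sure the release configuration doesn't define NDEBUG (a minimal sketch; the function below is invented):

```objc
// assert() is compiled out whenever NDEBUG is defined, so either remove
// NDEBUG from the release configuration's preprocessor macros, or
// defensively undefine it before the header is pulled in.
#undef NDEBUG
#include <assert.h>

void process_index(long index, long count) {
    assert(index >= 0 && index < count);  // still checked in the release build
    /* ... */
}
```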
I think the tradeoff is simple: yes, with only a release build, you really test what's actually being shipped. On the other hand, you do pay a price in ease of debugging for your developers and/or performance for the user, so it's up to you to check both cases.
On most medium- to large-size projects, ease of debugging will ensure a better product for your users in the end.
See this answer to "What's your most controversial programming opinion?":
"Opinion: Never ever have different code between 'debug' and 'release' builds. The main reason being that release code almost never gets tested. Better to have the same code running in test as it is in the wild."
By removing the "debug target", you are forcing developers to debug on the release version of the software. What that probably means in practice is two things:
1) "release builds" will have optimizations disabled (otherwise developers can't use a debugger)
2) No builds will have special PREPROCESSOR macros altering their execution.
So what you will really be doing is merging the release and debug configurations rather than eliminating just the "debug" mode.
I personally have done this with iOS development, with no ill effects. The amount of time spent in our own code is less than 1% of what is really happening, so the optimizations were not significant contributors. In this case they really did seem to cause an increase in bugs; but even if they didn't, testing one way and then handing QA code built differently introduces just one more factor to consider when issues come up.
On the other hand, there are cases where the optimizations are necessary, where they are useful, and even where there is enough time for testing both. Usually, the changes between debug and release are so minor that it doesn't cause anyone any issues at all.
If you've got a real QA group who can be counted on to fully test the thing, I'd say make debug builds until you get close to the release, and then make sure a full QA cycle is done on the same build that's going out the door.
Although in at least one case we released something that still had some debug code in it. The only consequence was it ran a tiny bit slower and the log files were pretty damn big.
In my company we have both Debug and Release.
- The developers use the debug version to properly find and fix bugs.
- We are using TDD, so we have a big test suite that we run on our server; it tests both the debug and release build configurations, as well as our 64-bit and 32-bit builds.
So if using the "debug" configuration helps a developer find a bug faster, there is no reason not to use it; when the code goes to the server (to be further tested) or gets reviewed, we use the "Release" one.
I learned to build the release version with .PDB files long ago so that I could debug the release version. What a lot of programmers tend to forget is that when you run the debug version, with all the optimizations turned off, you are debugging a different program altogether. It may behave like the release build (for the most part), but it is still a different program than the release build.
In addition, debugging the release build is not that difficult. And if you get a crash dump, you have to be able to do it anyway.