The Transitioning to ARC Release Notes makes this statement:
One issue to be aware of is that the optimizer is not run in common
debug configurations, so expect to see a lot more retain/release
traffic at -O0 than at -Os.
How can we enable the optimizer in a default debug configuration?
You can set the optimization level in Xcode's Build Settings independently for the Debug and Release configurations: open Build Settings, scroll down to the Optimization Level setting, and pick the level you want from the menu.
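The menu writes the GCC_OPTIMIZATION_LEVEL build setting, so the same change can also be made in an .xcconfig file. A minimal sketch (the file name is just an illustration):

```
// Debug.xcconfig (hypothetical file name)
// 0 = -O0 (Xcode's Debug default); s = -Os (the Release default)
GCC_OPTIMIZATION_LEVEL = s
```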
Note: you should probably only do this out of curiosity (which is to be encouraged :-)), as optimization can move or remove code. Debugging may become a little harder; e.g. a variable may "disappear", so you can't easily track its value once it's been assigned to a register.
Related
I am having a problem when trying to run the following Verilog code snippet in optimized mode using the ModelSim simulator v10.2c.
always @(*)
  if (dut.rtl_module.enable == 1'b1)
    force dut.rtl_module.abc_reg = xyz;
If the above snippet is run in non-optimized mode, it works fine, but in optimized mode it fails.
PS: I am using the -O5 optimization level.
Optimisation typically disables access to simulator objects. Your force command requires that access.
You'll need to explicitly enable access. Unfortunately, I can't see anything useful in the ModelSim AE documentation; however, from the Riviera-PRO documentation:
+accs
Enables access to design structure. This is the default in -O0,
-O1 and -O2 and may optionally be specified for -O3. If omitted,
the compiler may limit read access to selected objects.
ModelSim supports +acc; it just doesn't appear to be well documented. The only reference appears to be this suggestion:
While optimization is not necessary for class based debugging, you might want to use
vsim -voptargs=+acc=lprn to enable visibility into your design for RTL debugging.
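Applied to the snippet above, a plausible invocation might look like the following (the top-level name work.tb_top is illustrative, and the exact +acc specifiers depend on your ModelSim version; check the vopt documentation):

```
# Enable visibility/access to design objects during optimized simulation:
vsim -voptargs=+acc work.tb_top
```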
I'm completely stumped. I've been debugging for over a year and have only had this problem when the Build Configuration was set to Release. Now I have the Build Configuration set to Debug, and I have checked to be sure I am attaching to the correct process, yet I still cannot see the values while stepping through the code. Has anybody else run into this issue?
Here is a screen shot:
The value is returning, but I am unable to see the values of ANYTHING in this method or any of the other methods and I cannot figure out why.
Thank you for any hints you can give me.
============================== UPDATE ==================================
I've tried to print out the values, and this is the output I receive:
Notice, though, that the value in the Variables view is correct for result, even though I can't print it out. But the other values, like filePath, should not be nil.
This is so weird.
============================== UPDATE ==================================
I put the breakpoint on the return statement and still no luck:
This time I see no value for result:
I was facing the same issue: on mouse hover and in the Watch section, every variable showed a nil value, but NSLog printed the correct values for those same variables.
Now it's fixed. Try this:
Go to Product menu > Scheme > Edit scheme or ⌘+<
In Run, go to the Info tab and change the Build Configuration to "Debug".
Hope it will work.
@Lucy, ideally you shouldn't have to run under the Release profile when debugging; Release is meant as an app distribution profile, with a small package size and little or no debug symbols.
But if you must debug on Release (e.g. due to Environment, data reasons), make sure you have the Optimization Level set to 'None' for Release.
You can reach the config above through:
1) Project wide settings - affects all Targets
2) Target specific setting - affects a single Target only
Don't forget to switch this back before App Store distribution. Lower optimization will generate a larger .ipa, increasing your users' download time, as well as a larger dSYM if one is generated.
Background
For context, the Debug, Adhoc, and Release profiles are purely identifiers for the different build configurations that Xcode ships with. Xcode starts projects with a Debug profile with no optimization (and various other predefined configuration related to development and debugging), so symbol loading by the debugger is possible; you're free to tweak this however you wish.
In theory, you could set up a Release build profile that's identical to Debug.
I do see part of your problem. Notice how self is nil?
That means you are in a method call on a deallocated object. Turn on NSZombies to debug it.
See this SO answer for how to enable NSZombie.
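For reference, a sketch of the two usual ways to enable zombies (the menu path and variable name are as in current Xcode versions; adjust for yours):

```
# Either: Edit Scheme > Run > Diagnostics > check "Zombie Objects"
# Or set the environment variable in the Run scheme's Arguments tab:
NSZombieEnabled=YES
```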
Given that it is legal to send messages to nil in Objective-C, I would suspect that the object is being deallocated as a side effect of calling doesLicenseFileExist, or by another thread.
You may find it helpful to put a breakpoint in IDLicenseCommands' -dealloc method.
I had this problem, and turned "Link-Time Optimization" from "Incremental" to "No" and it resolved the debugging issue.
I am trying to build PETSc and am having problems enabling optimization. Without specifying anything, PETSc always creates a debugging build, but I can turn that off by passing --with-debugging=0 to configure. However, this only enables -O1 by default, and as my application is extremely time-consuming and very time-critical, I want at least -O2. I can't find an option except --CFLAGS, which works but always appends options to the end, so -O1 would override my -O2.
I grepped for "-O" to set the flag manually; this gave me a million lines, mostly from the configure.log file, and doesn't help.
Does anybody know the file where to set the flag, or a workaround, like another option that disables the last specified -O# and uses the strongest or the first one instead?
Citing PETSc's installation instructions:
Configure defaults to building PETSc in debug mode. One can switch to
using optimized mode with the toggle option --with-debugging [defaults
to debug enabled]. Additionally one can specify more suitable
optimization flags with the options COPTFLAGS, FOPTFLAGS, CXXOPTFLAGS.
./configure --with-cc=gcc --with-fc=gfortran --with-debugging=0 \
    COPTFLAGS='-O3 -march=p4 -mtune=p4' \
    FOPTFLAGS='-O3 -qarch=p4 -qtune=p4'
As a best practice, do you run code analysis on both debug and release builds, or just one or the other?
If for some reason the two builds are different (and they really shouldn't be for static analysis purposes), you should ensure that your metrics are run against what's actually going out to production.
Ideally, you should have a CI server, and the commands that developers run to initiate such analysis are no different from what the CI server does.
I usually pick one, and that one is the release build. I guess it doesn't really matter, but I tend to think that when gathering information about what will run in production, it is best to test exactly what will go to production (this goes for analysis, profiling, benchmarking, etc.).
Static Code Analysis will show the same results regardless of your build type.
Debug/Release only changes the resulting assembly and the inclusion or exclusion of debugging information at runtime.
I don't have separate ‘debug’ and ‘release’ builds (see Separate ‘debug’ and ‘release’ builds?).
The LLVM folks actually recommend analyzing the DEBUG configuration:
ALWAYS analyze a project in its "debug" configuration
Most projects can be built in a "debug" mode that enables assertions.
Assertions are picked up by the static analyzer to prune infeasible
paths, which in some cases can greatly reduce the number of false
positives (bogus error reports) emitted by the tool.
In addition, debug builds tend to be faster (no need for optimization), and in the CI world faster is always better (all else being equal).
I am working on a lock-free structure with the g++ compiler. It seems that with the -O1 switch, g++ will change the execution order of my code. How can I forbid g++'s optimization of certain parts of my code while maintaining the optimization of the rest? I know I can split it into two files and link them, but that looks ugly.
If you find that gcc changes the order of execution in your code, you should consider using a memory barrier. Just don't assume that volatile variables will protect you from that issue. They will only make sure that, within a single thread, the behavior is what the language guarantees, and that variables are always read from their memory location to account for changes "invisible" to the executing code (e.g. changes to a variable made by a signal handler).
GCC supports OpenMP since version 4.2. You can use it to create a memory barrier with a special #pragma directive.
A very good insight about locking free code is this PDF by Herb Sutter and Andrei Alexandrescu: C++ and the Perils of Double-Checked Locking
You can use the function attribute __attribute__((optimize("O0"))) to set the optimization level for a single function, or #pragma GCC optimize for a block of code. These require GCC 4.4 or later, though, I think - check your GCC manual. If they aren't supported, separating the source is your only option.
I would also say, though, that if your code fails with optimization turned on, it is most likely that your code is just wrong, especially as you're trying to do something that is fundamentally very difficult. The processor will potentially perform reordering on your code (within the limits of sequential consistency) so any re-ordering that you're getting with GCC could potentially occur anyway.