Understanding a malloc_history dump - Objective-C

If you have ever asked how to debug release/alloc issues in Objective-C, you will have come across these environment variables that can help track the problem down:
NSZombieEnabled - keeps objects around after release, so you can still get pointers etc.
MallocStackLogging - keeps each object's allocation history for later reference
NSDebugEnabled
You set all of these to YES in the 'Environment' section of the 'Arguments' tab of the executable's info panel (found in the group tree).
So, I'm getting this console output:
MyApp [4413:40b] -[CALayer retainCount]: message sent to deallocated instance 0x4dbb170
Then, while the debugger is stopped at the break, I open Terminal and type:
malloc_history 4413 0x4dbb170
Then I get a big text dump, and as far as I understand it, the important bit is this:
1
ALLOC 0x4dbb160-0x4dbb171 [size=18]:
thread_a0375540 |start | main |
UIApplicationMain | GSEventRun |
GSEventRunModal | CFRunLoopRunInMode |
CFRunLoopRunSpecific | __CFRunLoopRun
| __CFRunLoopDoTimer |
__CFRUNLOOP_IS_CALLING_OUT_TO_A_TIMER_CALLBACK_FUNCTION__
| __NSFireDelayedPerform |
-[todoListViewController drillDocumentMenu:] |
-[documentListViewController drillIntoDocumentWithToDoRecord:] |
-[documentViewController OpenTodoDocument:OfType:WithPath:] |
-[documentViewController OpenDocumentOfType:WithPath:] |
-[documentViewController managePDFDocumentWithPath:] |
-[PDFDocument loadPDFDocumentWithPath:andTitle:] |
-[PDFDocument getMetaData] | CGPDFDictionaryApplyFunction |
ListDictionaryObjects(char const*,
CGPDFObject*, void*) | NSLog | NSLogv
| _CFLogvEx | __CFLogCString |
asl_send | _asl_send_level_message |
asl_set_query | strdup | malloc |
malloc_zone_malloc
2
FREE 0x4dbb160-0x4dbb171 [size=18]:
thread_a0375540 |start | main |
UIApplicationMain | GSEventRun |
GSEventRunModal | CFRunLoopRunInMode |
CFRunLoopRunSpecific | __CFRunLoopRun
| __CFRunLoopDoTimer |
__CFRUNLOOP_IS_CALLING_OUT_TO_A_TIMER_CALLBACK_FUNCTION__
| __NSFireDelayedPerform |
-[todoListViewController drillDocumentMenu:] |
-[documentListViewController drillIntoDocumentWithToDoRecord:] |
-[documentViewController OpenTodoDocument:OfType:WithPath:] |
-[documentViewController OpenDocumentOfType:WithPath:] |
-[documentViewController managePDFDocumentWithPath:] |
-[PDFDocument loadPDFDocumentWithPath:andTitle:] |
-[PDFDocument getMetaData] | CGPDFDictionaryApplyFunction |
ListDictionaryObjects(char const*,
CGPDFObject*, void*) | NSLog | NSLogv
| _CFLogvEx | __CFLogCString |
asl_send | _asl_send_level_message |
asl_free | free
3
ALLOC 0x4dbb170-0x4dbb19f [size=48]:
thread_a0375540 |start | main |
UIApplicationMain | GSEventRun |
GSEventRunModal | CFRunLoopRunInMode |
CFRunLoopRunSpecific | __CFRunLoopRun
| __CFRunLoopDoTimer |
__CFRUNLOOP_IS_CALLING_OUT_TO_A_TIMER_CALLBACK_FUNCTION__
| __NSFireDelayedPerform |
-[todoListViewController drillDocumentMenu:] |
-[documentListViewController drillIntoDocumentWithToDoRecord:] |
-[documentViewController OpenTodoDocument:OfType:WithPath:] |
-[documentViewController OpenDocumentOfType:WithPath:] |
-[documentViewController managePDFDocumentWithPath:] |
-[ScrollViewWithPagingViewController init] | -[UIView init] |
-[UIScrollView initWithFrame:] | -[UIView initWithFrame:] | UIViewCommonInitWithFrame | -[UIView
_createLayerWithFrame:] | +[NSObject(NSObject) alloc] | +[NSObject(NSObject) allocWithZone:] | class_createInstance |
_internal_class_createInstanceFromZone | calloc | malloc_zone_calloc
What I don't understand, though: if the history was ALLOC, FREE, ALLOC, then why does the error indicate that it was released (net +1 alloc)?
Or is my understanding of the dump wrong?
EDIT (fresh run, so different object pointers):
Zombie detection with Instruments: why and how does the retain count jump from 1 to -1?
Looking at the backtrace of the zombie, it looks like retainCount is being called by Quartz, through release_root_if_unused.
Edit: solved. I was removing a view from its superview and then also releasing it; fixed by just releasing it.

@Kay is correct; the malloc history is showing two allocations at the specified address: one that has been allocated and freed, and one that is still in play.
What you need is the backtrace of the call to retainCount on the CALayer that has already been released. Because you have zombie detection enabled, amongst other memory-debugging options, it may be that the deallocation simply has not and will not happen.
Mixing malloc history with zombie detection changes the runtime behavior significantly.
I'd suggest running with zombie detection in Instruments. Hopefully, that'll pinpoint the exact problem.
If not, then there is a breakpoint you can set to break when a zombie is messaged. Set that breakpoint and see where you stop.
OK -- so, CoreAnimation is using the retain count for internal purposes (the system frameworks can get away with this, fragile though it is).
I think the -1 is a red herring; it is likely that zombies return 0xFF....FFFF as the retain count and this is rendered as -1 in Instruments.
Next best guess: since this is happening in a timer, the over-release is probably happening during an animation. The CoreAnimation layers should handle this correctly; more likely there is an over-release of a view or animation-layer container in your code that is causing the layer to go away prematurely.
Have you tried "Build and Analyze"? Off-chance it might catch the mismanagement of a view somewhere.
In any case and as an experiment, try retaining your view(s) an extra time and see if that makes this problem stop. If it does, that is, at least, a clue.
(Or it might be a bug in the system frameworks... maybe... but doubtful.)
Finally, who the heck is calling retainCount?!?!? In the case of CoreAnimation, it is possible that the retainCount is being used internally as an implementation detail.
If it is your code, though, then the location of the zombie call should be pretty apparent.

I am no expert, but take a look at the first line in block 3:
ALLOC 0x4dbb170-0x4dbb19f [size=48]:
In the other two entries, a memory block of size 18 at 0x4dbb160-0x4dbb171 was allocated and then freed. So I assume the old object was freed and a new object now resides at that memory address; the old instance at 0x...b160 is no longer valid.
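To make the overlap concrete: the reported zombie address 0x4dbb170 lies inside the freed 18-byte block and is also the start address of the newer 48-byte block, so the same address legitimately appears in all three history entries. A minimal sketch (plain Python, just arithmetic on the ranges taken from the dump):

```python
# Address ranges from the malloc_history dump (end addresses inclusive).
freed_block = (0x4dbb160, 0x4dbb171)   # size 18: ALLOC'd, then FREE'd
live_block  = (0x4dbb170, 0x4dbb19f)   # size 48: still allocated
zombie_addr = 0x4dbb170                # address from the console error

def contains(block, addr):
    start, end = block
    return start <= addr <= end

# The zombie address falls in BOTH ranges: it was part of the freed
# allocation and is now the start of an unrelated live allocation.
print(contains(freed_block, zombie_addr))  # True
print(contains(live_block, zombie_addr))   # True
print(live_block[0] == zombie_addr)        # True
```

This is why a net ALLOC/FREE/ALLOC history is consistent with the zombie message: the allocator reused (part of) the freed region for a new, unrelated object.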

Related

How to find out where the main function in an .exe file is?

I created a simple .exe file that just assigns a value of 3 to an integer called "x" and then prints out that value. [screenshot: source code]
I opened the .exe file with a hex editor (HxD) and used the disassembly view of Visual Studio 2017 to show me the opcodes of my main function. After a bit of searching, I found that the main function is stored in the file at offset 0xC10.
[screenshot: disassembly]
[screenshot: hexadecimal view of the .exe file]
I know that some values of the .exe file in the hex editor differ from what the Visual Studio debugger shows, but I know that main starts there, because I changed the value of x in the hex editor and when I started the .exe it printed out another value instead of 3. My question is: where in the .exe file is the value that says, "At this point in the file the opcodes of the main function start"?
For example in a .bmp file the 4 bytes at positions 0x0A,0x0B,0x0C and 0x0D tell you the offset of the first byte of the first pixel.
On Windows, the entry point of an executable (.exe) is set in the PE header of the file.
Wikipedia illustrates the structure of this header (as an SVG file).
Relative to the beginning of the file, the PE header starts at the position indicated at the address
DWORD 0x3C Pointer to PE Header
File Header / DOS Header
+--------------------+--------------------+
0000 | 0x5A4D | | |
0008 | | |
0010 | | |
0018 | | |
0020 | | |
0028 | | |
0030 | | |
0038 | | PE Header addr |
0040 | | |
.... | .................. | .................. |
And the entry point is designated at the position (relative to the address above)
DWORD 0x28 EntryPoint
PE Header
+--------------------+--------------------+
0000 | Signature | Machine | NumOfSect|
0008 | TimeDateStamp | PtrToSymTable |
0010 | NumOfSymTable |SizOfOHdr| Chars |
0018 | Magic | MJV| MNV | SizeOfCode |
0020 | SizeOfInitData | SizeOfUnInitData |
0028 | EntryPoint (RVA) | BaseOfCode (RVA) |
0030 | BaseOfData (RVA) | ImageBase |
0038 | SectionAlignment | FileAlignment |
0040 | ... | ... |
from the beginning of the PE header. This address is an RVA (relative virtual address), which means that it is relative to the image base address that the file is loaded at by the loader:
Relative virtual addresses (RVAs) are not to be confused with standard virtual addresses. A relative virtual address is the virtual address of an object from the file once it is loaded into memory, minus the base address of the file image.
Strictly speaking, this is the address of the executable's entry point (the C runtime's startup code), which in turn calls your main function.
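The two hops described above (DWORD at file offset 0x3C points to the PE header; the entry-point RVA sits at offset 0x28 from the start of that header) can be followed with a few lines of code. A minimal sketch in Python; it assumes the standard layout shown in the tables above, and a real tool would validate much more of the header:

```python
import struct

def entry_point_rva(data: bytes) -> int:
    """Return AddressOfEntryPoint from the headers of a PE image.

    Hop 1: the DWORD at file offset 0x3C is the PE header offset.
    Hop 2: the entry-point RVA is the DWORD at PE header + 0x28.
    """
    if data[:2] != b"MZ":
        raise ValueError("not a DOS/PE executable")
    (pe_offset,) = struct.unpack_from("<I", data, 0x3C)
    if data[pe_offset:pe_offset + 4] != b"PE\0\0":
        raise ValueError("PE signature not found")
    (entry_rva,) = struct.unpack_from("<I", data, pe_offset + 0x28)
    return entry_rva

# Usage against a real file, e.g.:
# with open("program.exe", "rb") as f:
#     print(hex(entry_point_rva(f.read())))
```

Note that 0x28 = 4 bytes of "PE\0\0" signature + 20 bytes of COFF file header + 16 bytes into the optional header, which matches the `0028 | EntryPoint (RVA)` row in the table above.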
A .exe is a portable executable.
Layout
Structure of a Portable Executable 32 bit
A PE file consists of a number of headers and sections that tell the dynamic linker how to map the file into memory. An executable image consists of several different regions, each of which require different memory protection; so the start of each section must be aligned to a page boundary.[4] For instance, typically the .text section (which holds program code) is mapped as execute/readonly, and ...
So really it is a question of where the .text section is in the file. Exactly where depends on the headers and locations of the other sections.

Can GraphDB load 10 million statements with OWL reasoning?

I am struggling to load most of the Drug Ontology OWL files and most of the ChEBI OWL files into GraphDB free v8.3 repository with Optimized OWL Horst reasoning on.
Is this possible? Should I do something other than "be patient"?
Details:
I'm using the loadrdf offline bulk loader to populate an AWS r4.16xlarge instance with 488.0 GiB of RAM and 64 vCPUs.
Over the weekend, I played around with different pool buffer sizes and found that most of these files individually load fastest with a pool buffer of 2,000 or 20,000 statements instead of the suggested 200,000. I also added -Xmx470g to the loadrdf script. Most of the OWL files would load individually in less than one hour.
Around 10 pm EDT last night, I started to load all of the files listed below simultaneously. Now it's 11 hours later, and there are still millions of statements to go. The load rate is around 70/second now. It appears that only 30% of my RAM is being used, but the CPU load is consistently around 60.
Are there websites that document other people doing something at this scale?
Should I be using a different reasoning configuration? I chose this configuration as it was the fastest-loading OWL configuration, based on my experiments over the weekend. I think I will need to look for relationships that go beyond rdfs:subClassOf.
Files I'm trying to load:
+-------------+------------+---------------------+
| bytes | statements | file |
+-------------+------------+---------------------+
| 471,265,716 | 4,268,532 | chebi.owl |
| 61,529 | 451 | chebi-disjoints.owl |
| 82,449 | 1,076 | chebi-proteins.owl |
| 10,237,338 | 135,369 | dron-chebi.owl |
| 2,374 | 16 | dron-full.owl |
| 170,896 | 2,257 | dron-hand.owl |
| 140,434,070 | 1,986,609 | dron-ingredient.owl |
| 2,391 | 16 | dron-lite.owl |
| 234,853,064 | 2,495,144 | dron-ndc.owl |
| 4,970 | 28 | dron-pro.owl |
| 37,198,480 | 301,031 | dron-rxnorm.owl |
| 137,507 | 1,228 | dron-upper.owl |
+-------------+------------+---------------------+
@MarkMiller, you can take a look at the Preload tool, which is part of the GraphDB 8.4.0 release. It's specially designed to handle large amounts of data at constant speed. Note that it works without inference, so you'll need to load your data and then change the ruleset and re-infer the statements.
http://graphdb.ontotext.com/documentation/free/loading-data-using-preload.html
Just typing out @Konstantin Petrov's correct suggestion with tidier formatting. All of these queries should be run in the repository of interest... at some point in working this out, I misled myself into thinking that I should be connected to the SYSTEM repo when running these queries.
All of these queries also require the following prefix definition:
prefix sys: <http://www.ontotext.com/owlim/system#>
This doesn't directly address the timing/performance of loading large datasets into an OWL reasoning repository, but it does show how to switch to a higher level of reasoning after loading lots of triples into a no-inference ("empty" ruleset) repository.
You could start by querying for the current reasoning level/ruleset, and then run the same SELECT statement after each INSERT.
SELECT ?state ?ruleset {
?state sys:listRulesets ?ruleset
}
Add a predefined ruleset
INSERT DATA {
_:b sys:addRuleset "rdfsplus-optimized"
}
Make the new ruleset the default
INSERT DATA {
_:b sys:defaultRuleset "rdfsplus-optimized"
}
Re-infer... could take a long time!
INSERT DATA {
[] <http://www.ontotext.com/owlim/system#reinfer> []
}
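If you'd rather script the ruleset switch than paste the queries into the Workbench, the same updates can be sent over HTTP. A sketch in plain Python; the URL shape assumes GraphDB's RDF4J-style REST interface (`/repositories/<repo>/statements` on the default port 7200), so verify the endpoint against your own server before relying on it:

```python
import urllib.request

PREFIX = "prefix sys: <http://www.ontotext.com/owlim/system#>\n"

# The three updates from the answer above, in the order they must run.
UPDATES = [
    'INSERT DATA { _:b sys:addRuleset "rdfsplus-optimized" }',
    'INSERT DATA { _:b sys:defaultRuleset "rdfsplus-optimized" }',
    "INSERT DATA { [] <http://www.ontotext.com/owlim/system#reinfer> [] }",
]

def build_request(base_url, repo, update):
    """Build a POST of one SPARQL update to <base>/repositories/<repo>/statements."""
    return urllib.request.Request(
        url=f"{base_url}/repositories/{repo}/statements",
        data=(PREFIX + update).encode("utf-8"),
        headers={"Content-Type": "application/sparql-update"},
        method="POST",
    )

# Usage (uncomment to run against a live server; the reinfer step can take hours):
# for u in UPDATES:
#     urllib.request.urlopen(build_request("http://localhost:7200", "myrepo", u))
```

The repository name "myrepo" and the base URL are placeholders; only the three update bodies come from the answer itself.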

How to know how much time is spent compiling a class or method in HotSpot?

1. I want to know how much time is spent compiling a class or method in HotSpot during JIT (we have a timeout issue and suspect it may be caused by a long compilation time). Is there any trace flag or other way to trace this time?
2. BTW, if a method is being run for the first time, the compilation time would be 0, as there has been no compilation at all, right?
JVM flags: -XX:+PrintCompilation -XX:+UnlockDiagnosticVMOptions -XX:+PrintCompilation2
289 425 4 java.time.LocalDate::until (116 bytes)
292 360 3 java.time.ZoneId::of (85 bytes) made not entrant
293 426 4 java.time.LocalDate::from (68 bytes)
293 386 3 java.time.LocalDate::from (68 bytes) made not entrant
293 426 size: 248(96) time: 0 inlined: 54 bytes
297 425 size: 3688(2272) time: 8 inlined: 1092 bytes
^ ^ ^ ^ ^
| | | | |
| | compiled bytes | bytecodes inlined
| compilation ID method compilation time (ms)
timestamp (ms from JVM start)
Note that
The JIT compiler works in the background while the application is running; it is unlikely to cause delays or timeouts.
There are typically multiple compiler threads; PrintCompilation output may appear interleaved.
A method may be (re)compiled multiple times with a different level of optimization.
jitwatch can record and display jit compilation logs.
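If you want those numbers programmatically rather than by eyeballing the log, the annotated fields can be pulled out with a small parser. A sketch in plain Python; it assumes the lines have exactly the `size: ... time: ... inlined: ...` shape shown above (real PrintCompilation output has other line variants that this deliberately ignores):

```python
import re

# Matches PrintCompilation2 summary lines such as:
#   297 425 size: 3688(2272) time: 8 inlined: 1092 bytes
LINE = re.compile(
    r"\s*(?P<timestamp>\d+)\s+(?P<compile_id>\d+)\s+"
    r"size:\s*(?P<total>\d+)\((?P<compiled>\d+)\)\s+"
    r"time:\s*(?P<time_ms>\d+)\s+"
    r"inlined:\s*(?P<inlined>\d+)\s*bytes"
)

def parse(line):
    """Return the numeric fields of one summary line, or None if it doesn't match."""
    m = LINE.match(line)
    if m is None:
        return None
    return {k: int(v) for k, v in m.groupdict().items()}

print(parse("297 425 size: 3688(2272) time: 8 inlined: 1092 bytes"))
```

Filtering the parsed records for large `time_ms` values would surface any method whose compilation genuinely took long, which is the question being asked.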

How do I find out where a message is being sent from?

Our app is crashing during our tests, sending a message to a deallocated UINavigationItem. I've looked using Instruments, but all the releases and retains look balanced; it looks like something is hanging onto the variable without retaining it. I'd like to find where the message is being sent from, so that I can make sure that the object stays alive long enough to receive it.
The error in the console is:
-[UINavigationItem safeValueForKey:]: message sent to deallocated instance 0x11afab80
The stack trace is:
0 CoreFoundation ___forwarding___
1 CoreFoundation _CF_forwarding_prep_0
2 UIKit -[UINavigationItemButtonViewAccessibility(SafeCategory) accessibilityTraits]
3 UIAccessibility -[NSObject(NSObjectAccessibility) accessibilityAttributeValue:]
4 UIAccessibility _copyAttributeValueCallback
5 AXRuntime _AXXMIGCopyAttributeValue
6 AXRuntime _XCopyAttributeValue
7 AXRuntime mshMIGPerform
8 CoreFoundation __CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE1_PERFORM_FUNCTION__
9 CoreFoundation __CFRunLoopDoSource1
10 CoreFoundation __CFRunLoopRun
11 CoreFoundation CFRunLoopRunSpecific
12 CoreFoundation CFRunLoopRunInMode
13 GraphicsServices GSEventRunModal
14 GraphicsServices GSEventRun
15 UIKit UIApplicationMain
16 MyApp Functional Tests main [myapp]/main.m:14
17 MyApp Functional Tests start
...but none of that is in my code. How do I find out where the message is being sent from?
Use the command:
malloc_history <process_id> <memory_address>
e.g. malloc_history <process_id> 0x11afab80
Enable the following environment variables first:
1) MallocStackLogging
2) NSDebugEnabled
3) NSZombieEnabled
This will solve the problem.

Objective C: Accelerometer directions

I'm trying to detect a shaking motion in an up and down direction like this:
/ \
|
________________________________
| |
| |
| |
|o |
| |
| |
|______________________________|
|
\ /
- (void)bounce:(UIAcceleration *)acceleration {
    NSLog(@"%f", acceleration.x);
}
I was thinking it's the x axis, but that responds when you turn the device to be parallel to the floor. How do I detect this?
I don't have a direct answer for you, but a great experiment is to make a quick app which spews the numbers out onto the screen or into a file (or both). Then shake it the way you want to detect, and see which axis shows the greatest change.
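Once you have logged samples for all three axes, picking the axis with the greatest change can be done mechanically. A sketch of that analysis step in plain Python, with made-up sample values standing in for the logged data (for a device held upright as in the drawing, the up-down direction is typically the y axis in UIKit's accelerometer coordinates):

```python
# Hypothetical logged (x, y, z) accelerometer samples, in g.
# A device shaken up and down while held upright should show the
# largest swings in the second column.
samples = [
    (0.02, -0.95, 0.01),
    (0.03, -1.80, 0.02),
    (0.01, -0.10, 0.00),
    (0.02, -1.95, 0.03),
    (0.02, -0.05, 0.01),
]

def spread(values):
    """Peak-to-peak swing of one axis over the recording."""
    return max(values) - min(values)

axes = list(zip(*samples))                 # regroup samples per axis
spreads = [spread(axis) for axis in axes]
shake_axis = "xyz"[spreads.index(max(spreads))]
print(shake_axis)  # y
```

The same comparison done on your own log should tell you directly which of `acceleration.x`, `.y`, or `.z` to watch for the shake.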