I have a beginner question about ARC in Xcode. The following code runs without memory issues because the memory is freed by ARC.
- (void)viewDidLoad
{
[super viewDidLoad];
// test nsmutabledata
dispatch_queue_t testQueue = dispatch_queue_create("testQueue", NULL);
dispatch_async(testQueue, ^{
while (1) {
NSMutableData *testData = [[NSMutableData alloc]initWithCapacity:1024*1024*5];
NSLog(@"testData size: %lu", (unsigned long)testData.length);
}
});
}
However, the following does not, and gives me a memory allocation error after a few seconds.
+ (NSMutableData *) testDataMethod
{
NSMutableData *testDataLocal = [[NSMutableData alloc]initWithCapacity:1024*1024*5];
return testDataLocal;
}
- (void)viewDidLoad
{
[super viewDidLoad];
// test nsmutabledata
dispatch_queue_t testQueue = dispatch_queue_create("testQueue", NULL);
dispatch_async(testQueue, ^{
while (1) {
NSMutableData *testData = [RootViewController testDataMethod];
NSLog(@"testData size: %lu", (unsigned long)testData.length);
}
});
}
Do I have the wrong understanding of ARC? I thought testDataLocal is retained once but goes out of scope when the method exits. testData adds another reference, but at the next iteration of the loop testData should have no references left and be freed by the system.
In the first bit of code, the NSMutableData object is released at the end of each loop iteration, which avoids any memory issues.
In the second bit of code, the return value of testDataMethod is most likely being autoreleased. Since your app is running in a tight loop, the autorelease pool is never given a chance to be drained, so you quickly run out of memory.
Try changing your second bit of code to this:
while (1) {
@autoreleasepool {
NSMutableData *testData = [RootViewController testDataMethod];
NSLog(@"testData size: %lu", (unsigned long)testData.length);
}
}
As you can see, the code below isn't doing much more than enumerating over a set of files (the rest is commented out). However, my memory usage grows to over 2 GB after about 40 seconds of running the function below, which is launched by pressing a button in the UI.
I can run the UI for hours before pressing the button, and the memory usage does not exceed 8 MB.
Given that ARC is turned on, what is holding on to the memory?
I removed the original code, as the edit below made no difference.
EDIT:
I attempted @autoreleasepool { dispatch_async ... } and permutations of that around the while loop and inside it, which had no effect.
Here is the code with @autoreleasepool added and cleaned up:
-(void) search{
self.dict = [[NSMutableDictionary alloc] init];
NSFileHandle *fileHandle = [NSFileHandle fileHandleForWritingAtPath:@"/tmp/SeaWall.log"];
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
NSString *bundleRoot = @"/";
NSFileManager *manager = [NSFileManager defaultManager];
NSDirectoryEnumerator *direnum = [manager enumeratorAtPath:bundleRoot];
NSString *filename;
while ((filename = [NSString stringWithFormat:@"/%@", [direnum nextObject]]) && !self.exit) {
@autoreleasepool {
NSString *ext = filename.pathExtension;
if ([ext hasSuffix:@"so"] || [ext hasSuffix:@"dylib"] ) {
if (filename == nil || [NSURL URLWithString:filename] == nil) {
continue;
}
NSData *nsData = [NSData dataWithContentsOfFile:filename];
if (nsData != nil){
NSString *str = [nsData MD5];
nsData = nil;
[self writeToLogFile:[NSString stringWithFormat:@"%@ - %@", [filename lastPathComponent], str]];
}
}
ext = nil;
} // end autoreleasepool
}
[fileHandle closeFile];
[self ControlButtonAction:nil];
});
}
The memory is not exactly leaked: it is very much ready to be released, but it never has a chance to be.
ARC builds upon the manual memory management rules of Objective-C. The base rule is that "the object/function that calls init owns the new instance", and the owner must release the object when it no longer needs it.
This is a problem for convenience methods that create objects, like [NSData dataWithContentsOfFile:]. The rule means that the NSData class owns the instance, because it called init on it. Once the value is returned, the class no longer needs the object, so it would have to release it. However, if that release happened before the caller got a chance to retain the instance, the object would be gone before anything could use it.
To solve this problem, Cocoa introduces the autorelease method. This method transfers ownership of the object to the most recently set up autorelease pool. Autorelease pools are "drained" when you exit their scope.
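As an illustration, a convenience constructor under manual reference counting is conventionally written roughly like this (a sketch of the convention, not Foundation's actual source):
+ (NSData *)dataWithContentsOfFile:(NSString *)path
{
    // The class alloc/inits the instance, so it owns it...
    NSData *data = [[NSData alloc] initWithContentsOfFile:path];
    // ...and hands ownership to the current autorelease pool before returning.
    return [data autorelease];
}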
Cocoa/AppKit/UIKit automatically set up autorelease pools around event handlers, so you generally do not need to worry about that. However, if you have a long-running method, this becomes an issue.
You can declare an autorelease pool using the @autoreleasepool statement:
@autoreleasepool
{
// code here
}
At the closing bracket, the objects collected by the autorelease pool are released (and possibly deallocated, if no one else has a reference to them).
So you would need to wrap the body of your loop in this statement.
Here's an example. This code "leaks" about 10 megabytes every second on my computer, because the execution never leaves the @autoreleasepool scope:
int main(int argc, const char * argv[])
{
@autoreleasepool
{
while (true)
{
NSString* path = [NSString stringWithFormat:@"%s", argv[0]];
[NSData dataWithContentsOfFile:path];
}
}
}
On the other hand, with this, the memory usage stays stable, because execution leaves the @autoreleasepool scope at the end of every loop iteration:
int main(int argc, const char * argv[])
{
while (true)
{
@autoreleasepool
{
NSString* path = [NSString stringWithFormat:@"%s", argv[0]];
[NSData dataWithContentsOfFile:path];
}
}
}
Creating objects in the loop condition is awkward for long loops, because those objects are not picked up by the inner @autoreleasepool. You will need to move that work inside the @autoreleasepool scope as well.
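Applied to the enumeration loop from the question, that means pulling the nextObject call and the stringWithFormat: into the pool, roughly like this (a sketch reusing the same variable names):
while (!self.exit) {
    @autoreleasepool {
        NSString *relativePath = [direnum nextObject];
        if (relativePath == nil) {
            break; // enumeration finished; leaving the block drains the pool
        }
        // Now every temporary object, including filename, lands in the inner pool.
        NSString *filename = [NSString stringWithFormat:@"/%@", relativePath];
        // ... rest of the loop body as before ...
    }
}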
Returning
Whenever we return an object (perhaps to Swift), it needs to be registered with the nearest @autoreleasepool block (by calling the autorelease method, per the ownership rules) to prevent a memory leak; nowadays ARC does that automatically for us.
When ARC is disabled, call autorelease manually after alloc/init, like:
- (NSString *)fullName {
NSString *string = [[[NSString alloc] initWithFormat:@"%@ %@",
self.firstName, self.lastName] autorelease];
return string;
}
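Under ARC the explicit autorelease is forbidden; you simply return the object and the compiler inserts the equivalent handling for you (a sketch of the ARC version of the same method):
- (NSString *)fullName {
    // ARC retains/releases (or autoreleases) across the return as needed.
    NSString *string = [[NSString alloc] initWithFormat:@"%@ %@",
                        self.firstName, self.lastName];
    return string;
}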
The memory needs to be released by an autorelease pool.
Otherwise it will be held onto, as you are experiencing, and will effectively leak.
In your loop, put:
@autoreleasepool { /* BODY */ }
I'm having a random EXC_BAD_ACCESS KERN_INVALID_ADDRESS crash, but I can't pinpoint the source. However, I'm wondering if this might be it:
I have an audio_queue created like this:
_audio_queue = dispatch_queue_create("AudioQueue", nil);
which I use to create and access an object called _audioPlayer:
dispatch_async(_audio_queue, ^{
_audioPlayer = [[AudioPlayer alloc] init];
});
The audio player is owned by a MovieView:
@implementation MovieView
{
AudioPlayer *_audioPlayer;
}
Then, in the dealloc method of MovieView, I have:
- (void)dealloc
{
dispatch_async(_audio_queue, ^{
[_audioPlayer destroy];
});
}
Is this acceptable? I'm thinking that by the time the block is called, the MovieView would have already been deallocated, and when trying to access the _audioPlayer, it no longer exists. Is this the case?
My crash report only says:
MovieView.m line 0
__destroy_helper_block_
Your bug is in the ivar access. This is due to how ivars work in ObjC: the -dealloc above is equivalent to
- (void)dealloc
{
dispatch_async(self->_audio_queue, ^{
[self->_audioPlayer destroy];
});
}
This can break because you end up using self after it is dealloced.
The fix is something like
- (void)dealloc
{
AudioPlayer *audioPlayer = _audioPlayer;
dispatch_async(_audio_queue, ^{
[audioPlayer destroy];
});
}
(It is frequently not thread-safe to explicitly or implicitly (via ivars) reference self in a block. Sadly, I don't think there is a warning for this.)
If this is the cause, then you could use dispatch_sync:
- (void)dealloc
{
dispatch_sync(_audio_queue, ^{
[_audioPlayer destroy];
});
}
I haven't tested this, though.
I have a small project that reads an HTTP stream from a remote server, demuxes it, extracts the audio stream, decodes it into 16-bit PCM, and feeds it into a corresponding AudioQueue. The decoder/demuxer/fetcher runs in a separate thread and uses my home-grown blocking queue (see code below) to deliver the decoded frames to the AudioQueue callback. The queue uses an NSMutableArray to store the objects.
Once this thing is in flight, it leaks the objects inserted into the queue. The memory profiler says that the RefCt is 2 at the point where I expect it to be 0 and the object to be released by ARC.
Here are the queue/dequeue methods:
- (id) dequeue {
dispatch_semaphore_wait(objectsReady, DISPATCH_TIME_FOREVER);
[lock lock];
id anObject = [queue objectAtIndex:0];
[queue removeObjectAtIndex:0];
[lock unlock];
dispatch_semaphore_signal(freeSlots);
return anObject;
}
- (void) enqueue:(id)element {
dispatch_semaphore_wait(freeSlots, DISPATCH_TIME_FOREVER);
[lock lock];
[queue addObject:element];
[lock unlock];
dispatch_semaphore_signal(objectsReady);
}
Producer thread does this:
[pAudioFrameQueue enqueue:[self convertAVFrameAudioToPcm:audioFrame]];
And the "convertAVFrameAudioToPcm" methods looks like this:
- (NSData*) convertAVFrameAudioToPcm:(AVFrame*)frame {
NSData* ret = nil;
int16_t* outputBuffer = malloc(outputByteLen);
// decode into outputBuffer and other stuff
ret = [NSData dataWithBytes:outputBuffer length:outputByteLen];
free(outputBuffer);
return ret;
}
Consumer does this:
- (void) fillAvailableAppleAudioBuffer:(AudioQueueBufferRef)bufferToFill {
#autoreleasepool {
NSData* nextAudioBuffer = [pAudioFrameQueue dequeue];
if (nextAudioBuffer != nil) {
[nextAudioBuffer getBytes:bufferToFill->mAudioData]; // I know this is not safe
bufferToFill->mAudioDataByteSize = nextAudioBuffer.length;
} else {
NSLog(@"ERR: End of stream...");
}
}
}
To me it looks like RefCt should become 0 when fillAvailableAppleAudioBuffer exits, but apparently ARC disagrees and does not release the object.
Am I having a bug in my simple queue code?
Or do I instantiate NSData in a wrong way?
Or am I missing some special rule about how ARC works between threads? By the way, the producer thread starts like this:
- (BOOL) startFrameFetcher {
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT,
(unsigned long)NULL),
^(void) {
[self frameFetcherThread];
});
return YES;
}
Any hints will be much appreciated!
PS: last but not least, I have another instance of the same blocking queue that stores video frames, which I dequeue and display via an NSTimer. The video frames do not leak! I am guessing this may have something to do with threading; otherwise I would have expected to see the leak in both queues.
I have an application in which a long running process (> 1 min) is placed onto an NSOperationQueue (Queue A). The UI is fully-responsive while the Queue A operation runs, exactly as expected.
However, I have a different kind of operation the user can perform which runs on a completely separate NSOperationQueue (Queue B).
When a UI event triggers the placement of an operation on Queue B, it must wait until after the currently-executing operation on Queue A finishes. This occurs on an iPod Touch (MC544LL).
What I expected to see instead was that any operation placed onto Queue B would more or less begin immediately executing in parallel with the operation on Queue A. This is the behavior I see on the Simulator.
My question is two parts:
Is the behavior I'm seeing on my device to be expected based on available documentation?
Using NSOperation/NSOperationQueue, how do I pre-empt the currently running operation on Queue A with a new operation placed on Queue B?
Note: I can get exactly the behavior I'm after by using GCD queues for Queues A/B, so I know my device is capable of supporting what I'm trying to do. However, I really, really want to use NSOperationQueue because both operations need to be cancelable.
I have a simple test application:
The ViewController is:
//
// ViewController.m
// QueueTest
//
#import "ViewController.h"
@interface ViewController ()
@property (strong, nonatomic) NSOperationQueue *slowQueue;
@property (strong, nonatomic) NSOperationQueue *fastQueue;
@end
@implementation ViewController
-(id)initWithCoder:(NSCoder *)aDecoder
{
if (self = [super initWithCoder:aDecoder]) {
self.slowQueue = [[NSOperationQueue alloc] init];
self.fastQueue = [[NSOperationQueue alloc] init];
}
return self;
}
-(void)viewDidLoad
{
NSLog(@"View loaded on thread %@", [NSThread currentThread]);
}
// Responds to "Slow Op Start" button
- (IBAction)slowOpStartPressed:(id)sender {
NSBlockOperation *operation = [[NSBlockOperation alloc] init];
[operation addExecutionBlock:^{
[self workHard:600];
}];
[self.slowQueue addOperation:operation];
}
// Responds to "Fast Op Start" button
- (IBAction)fastOpStart:(id)sender {
NSBlockOperation *operation = [[NSBlockOperation alloc] init];
[operation addExecutionBlock:^{
NSLog(@"Fast operation on thread %@", [NSThread currentThread]);
}];
[self.fastQueue addOperation:operation];
}
-(void)workHard:(NSUInteger)iterations
{
NSLog(@"SlowOperation start on thread %@", [NSThread currentThread]);
NSDecimalNumber *result = [[NSDecimalNumber alloc] initWithString:@"0"];
for (NSUInteger i = 0; i < iterations; i++) {
NSDecimalNumber *outer = [[NSDecimalNumber alloc] initWithUnsignedInteger:i];
for (NSUInteger j = 0; j < iterations; j++) {
NSDecimalNumber *inner = [[NSDecimalNumber alloc] initWithUnsignedInteger:j];
NSDecimalNumber *product = [outer decimalNumberByMultiplyingBy:inner];
result = [result decimalNumberByAdding:product];
}
result = [result decimalNumberByAdding:outer];
}
NSLog(@"SlowOperation end");
}
@end
The output I see after first pressing the "Slow Op Start" button followed ~1 second later by pressing the "Fast Op Start" button is:
2012-11-28 07:41:13.051 QueueTest[12558:907] View loaded on thread <NSThread: 0x1d51ec30>{name = (null), num = 1}
2012-11-28 07:41:14.745 QueueTest[12558:1703] SlowOperation start on thread <NSThread: 0x1d55e5f0>{name = (null), num = 3}
2012-11-28 07:41:25.127 QueueTest[12558:1703] SlowOperation end
2012-11-28 07:41:25.913 QueueTest[12558:3907] Fast operation on thread <NSThread: 0x1e36d4c0>{name = (null), num = 4}
As you can see, the second operation does not begin executing until after the first operation finishes, despite the fact that these are two separate (and presumably independent) NSOperationQueues.
I have read the Apple Concurrency Guide, but find nothing describing this situation. I've also read two SO questions on related topics (link, link), but neither seems to get to the heart of the problem I'm seeing (pre-emption).
Other things I've tried:
setting the queuePriority on each NSOperation
setting the queuePriority on each NSOperation while placing both types of operations onto the same queue
placing both operations onto the same queue
This question has undergone multiple edits, which may make certain comments/answers difficult to understand.
I suspect the problem you are having is that both operation queues are executing their blocks on the underlying default priority dispatch queue. Consequently, if several slow operations are enqueued before the fast operations then perhaps you will see this behaviour.
Why not either configure the NSOperationQueue instance for the slow operations so that it only executes one operation at a time (i.e. set its maxConcurrentOperationCount to one), or, if your operations are all blocks, use GCD queues directly? e.g.
static dispatch_queue_t slowOpQueue = NULL;
static dispatch_queue_t fastOpQueue = NULL;
static dispatch_once_t onceToken;
dispatch_once(&onceToken, ^{
slowOpQueue = dispatch_queue_create("Slow Ops Queue", NULL);
fastOpQueue = dispatch_queue_create("Fast Ops Queue", DISPATCH_QUEUE_CONCURRENT);
});
for (NSUInteger slowOpIndex = 0; slowOpIndex < 5; slowOpIndex++) {
dispatch_async(slowOpQueue, ^(void) {
NSLog(@"* Starting slow op %lu.", (unsigned long)slowOpIndex);
for (NSUInteger delayLoop = 0; delayLoop < 1000; delayLoop++) {
putchar('.');
}
NSLog(@"* Ending slow op %lu.", (unsigned long)slowOpIndex);
});
}
for (NSUInteger fastBlockIndex = 0; fastBlockIndex < 10; fastBlockIndex++) {
dispatch_async(fastOpQueue, ^(void) {
NSLog(@"Starting fast op %lu.", (unsigned long)fastBlockIndex);
NSLog(@"Ending fast op %lu.", (unsigned long)fastBlockIndex);
});
}
As for using NSOperationQueue, given your comments about needing the operation-cancellation facilities and so on, can you try:
- (void)loadSlowQueue
{
[self.slowQueue setMaxConcurrentOperationCount:1];
NSBlockOperation *operation = [NSBlockOperation blockOperationWithBlock:^{
NSLog(@"begin slow block 1");
[self workHard:500];
NSLog(@"end slow block 1");
}];
NSBlockOperation *operation2 = [NSBlockOperation blockOperationWithBlock:^{
NSLog(@"begin slow block 2");
[self workHard:500];
NSLog(@"end slow block 2");
}];
[self.slowQueue addOperation:operation];
[self.slowQueue addOperation:operation2];
}
I think the two blocks you add to the operation on the slow queue are otherwise executed in parallel on the default queue, preventing your fast operations from being scheduled.
Edit:
If you're still finding that the default GCD queue is choking, why not create an NSOperation subclass that executes blocks without using GCD at all for your slow operations? This still gives you the declarative convenience of not creating a separate subclass for each operation, but uses the threading model of a regular NSOperation. e.g.
#import <Foundation/Foundation.h>
typedef void (^BlockOperation)(NSOperation *containingOperation);
@interface PseudoBlockOperation : NSOperation
- (id)initWithBlock:(BlockOperation)block;
- (void)addBlock:(BlockOperation)block;
@end
And then for the implementation:
#import "PseudoBlockOperation.h"
@interface PseudoBlockOperation()
@property (nonatomic, strong) NSMutableArray *blocks;
@end
@implementation PseudoBlockOperation
@synthesize blocks;
- (id)init
{
self = [super init];
if (self) {
blocks = [[NSMutableArray alloc] initWithCapacity:1];
}
return self;
}
- (id)initWithBlock:(BlockOperation)block
{
self = [self init];
if (self) {
[blocks addObject:[block copy]];
}
return self;
}
- (void)main
{
@autoreleasepool {
for (BlockOperation block in blocks) {
block(self);
}
}
}
- (void)addBlock:(BlockOperation)block
{
[blocks addObject:[block copy]];
}
@end
Then in your code you can do something like:
PseudoBlockOperation *operation = [[PseudoBlockOperation alloc] init];
[operation addBlock:^(NSOperation *operation) {
if (!operation.isCancelled) {
NSLog(@"begin slow block 1");
[self workHard:500];
NSLog(@"end slow block 1");
}
}];
[operation addBlock:^(NSOperation *operation) {
if (!operation.isCancelled) {
NSLog(@"begin slow block 2");
[self workHard:500];
NSLog(@"end slow block 2");
}
}];
[self.slowQueue addOperation:operation];
Note that in this example any blocks that are added to the same operation will be executed sequentially rather than concurrently, to execute concurrently create one operation per block. This has the advantage over NSBlockOperation in that you can pass parameters into the block by changing the definition of BlockOperation - here I passed the containing operation, but you could pass whatever other context is required.
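Since cancellation is the reason for sticking with NSOperationQueue: the isCancelled checks above cooperate with the standard cancellation calls, for example:
[operation cancel];                   // cancel a single operation
[self.slowQueue cancelAllOperations]; // or everything queued on slowQueue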
Hope that helps.
I'm playing around with blocks in Objective-C, trying to come up with a reusable mechanism that will take an arbitrary block of code and a lock object and then execute the block of code on a new thread, synchronized on the provided lock. The idea is to come up with a simple way to move all synchronization overhead/waiting off of the main thread so that an app's UI will always be responsive.
The code I've come up with is pretty straightforward; it goes like this:
- (void) executeBlock: (void (^)(void))block {
block();
}
- (void) runAsyncBlock: (void (^)(void))block withLock:(id)lock {
void(^syncBlock)() = ^{
@synchronized(lock) {
block();
}
};
[self performSelectorInBackground:@selector(executeBlock:) withObject:syncBlock];
}
So for example, you might have some methods that go like:
- (void) addObjectToSharedArray:(id) theObj {
@synchronized(array) {
[array addObject: theObj];
}
}
- (void) removeObjectFromSharedArray:(id) theObj {
@synchronized(array) {
[array removeObject: theObj];
}
}
Which works fine, but blocks the calling thread while waiting for the lock. These could be rewritten as:
- (void) addObjectToSharedArray:(id) theObj {
[self runAsyncBlock:^{
[array addObject: theObj];
} withLock: array];
}
- (void) removeObjectFromSharedArray:(id) theObj {
[self runAsyncBlock: ^{
[array removeObject: theObj];
} withLock:array];
}
Which should always return immediately, since only the background threads will compete over the lock.
The problem is, this code crashes after executeBlock: without producing any output, error message, crash log, or any other useful thing. Is there something fundamentally flawed in my approach? If not, any suggestions with respect to why this might be crashing?
Edit:
Interestingly, it works without crashing if I simply do:
- (void) runAsyncBlock: (void (^)(void))block withLock:(id)lock {
void(^syncBlock)() = ^{
@synchronized(lock) {
block();
}
};
syncBlock();
}
But of course this will block the calling thread, which largely defeats the purpose. Is it possible that blocks do not cross thread boundaries? I would think not, since that would largely defeat the purpose of having them in the first place.
Remember to call [block copy]; otherwise the block is not correctly retained. Blocks are created on the stack and destroyed when they go out of scope, and unless you call copy they will not be moved to the heap, even if retain is called.
- (void) runAsyncBlock: (void (^)(void))block withLock:(id)lock {
block = [[block copy] autorelease];
void(^syncBlock)() = ^{
@synchronized(lock) {
block();
}
};
syncBlock = [[syncBlock copy] autorelease];
[self performSelectorInBackground:@selector(executeBlock:) withObject:syncBlock];
}
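As an aside, dispatch_async copies its block automatically, so a GCD-based variant avoids the manual copy/autorelease entirely (a sketch, assuming a background global queue is acceptable here):
- (void) runAsyncBlock: (void (^)(void))block withLock:(id)lock {
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        @synchronized(lock) {
            // block and the enclosing block literal are copied to the heap by GCD.
            block();
        }
    });
}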