Good morning,
I'm running into a memory management issue in the video processing software I'm writing (video capture + (almost) real-time processing + display + recording).
The following code is part of the "..didOutputSampleBuffer.." function of AVCaptureVideoDataOutputSampleBufferDelegate.
capturePreviewLayer is a CALayer.
ctx is a CIContext I reuse over and over.
outImage is a vImage_Buffer.
With the commented section kept commented, memory usage is stable and acceptable, but if I uncomment it, memory won't stop increasing. Note that if I leave the filtering operation commented and only keep the CIImage creation and the conversion back to a CGImageRef, the problem remains (i.e. I don't think it is related to the filter itself).
If I run Xcode's Analyze, it flags a potential memory leak when this part is uncommented, but none when it is commented.
Does anybody have an idea how to explain and fix this?
Thank you very much!
Note : I prefer not to use AVCaptureVideoPreviewLayer and its filters property.
CGImageRef convertedImage = vImageCreateCGImageFromBuffer(&outImage, &outputFormat, NULL, NULL, 0, &err);

//CIImage * img = [CIImage imageWithCGImage:convertedImage];
////[acc setValue:img forKey:@"inputImage"];
////img = [acc valueForKey:@"outputImage"];
//convertedImage = [self.ctx createCGImage:img fromRect:img.extent];

dispatch_sync(dispatch_get_main_queue(), ^{
    self.capturePreviewLayer.contents = (__bridge id)(convertedImage);
});

CGImageRelease(convertedImage);
free(outImage.data);
Both vImageCreateCGImageFromBuffer() and -[CIContext createCGImage:fromRect:] give you a reference you are responsible for releasing. You are only releasing one of them.
When you replace the value of convertedImage with the new CGImageRef, you are losing the reference to the previous one without releasing it. You need to add another call to CGImageRelease(convertedImage) after your last use of the old image and before you lose that reference.
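For example, here is a sketch of the uncommented path with both references released; keeping two separate variables (the names are mine) makes it harder to lose a reference:

CGImageRef vImageCG = vImageCreateCGImageFromBuffer(&outImage, &outputFormat, NULL, NULL, 0, &err);

CIImage *img = [CIImage imageWithCGImage:vImageCG];
[acc setValue:img forKey:@"inputImage"];
img = [acc valueForKey:@"outputImage"];

CGImageRef filteredCG = [self.ctx createCGImage:img fromRect:img.extent];
CGImageRelease(vImageCG);        // done with the vImage-created image

dispatch_sync(dispatch_get_main_queue(), ^{
    self.capturePreviewLayer.contents = (__bridge id)(filteredCG);
});

CGImageRelease(filteredCG);      // done with the CIContext-created image
free(outImage.data);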
I am trying to render 3 separate things to one texture in Metal.
I have an MTLTexture that is used as the destination in 3 different MTLCommandBuffers. I commit them one after another. Each MTLCommandBuffer renders to a separate part of the texture - the first draws into the 0-1/3 region, the second into the middle 1/3-2/3, and the last one into 2/3-1.
id<MTLTexture> dst_texture = ...;
id<MTLCommandBuffer> buffer1 = [self drawToTexture:dst_texture];
[buffer1 commit];
id<MTLCommandBuffer> buffer2 = [self drawToTexture:dst_texture];
[buffer2 commit];
id<MTLCommandBuffer> buffer3 = [self drawToTexture:dst_texture];
[buffer3 commit];
The problem is that it seems I can't share the destination texture between the different command buffers - I get glitches, and sometimes I can see only partial results on the destination texture...
Inside drawToTexture I use dst_texture this way:
_renderPassDescriptor.colorAttachments[0].texture = dst_texture;
_renderPassDescriptor.colorAttachments[0].loadAction = MTLLoadActionLoad;
The problem goes away when I call [buffer waitUntilCompleted] after each individual commit, but I assume that hurts performance, and I would love to avoid blocking/waiting.
This works:
id<MTLTexture> dst_texture = ...;
id<MTLCommandBuffer> buffer1 = [self drawToTexture:dst_texture];
[buffer1 commit];
[buffer1 waitUntilCompleted];
id<MTLCommandBuffer> buffer2 = [self drawToTexture:dst_texture];
[buffer2 commit];
[buffer2 waitUntilCompleted];
id<MTLCommandBuffer> buffer3 = [self drawToTexture:dst_texture];
[buffer3 commit];
[buffer3 waitUntilCompleted];
What else could I do here to avoid the waitUntilCompleted calls?
To answer the questions "I am trying to render 3 separate things in one texture in Metal" and "What else could I do here to avoid waitUntilCompleted calls?" (Hamid has already explained why the problem occurs): you shouldn't be using multiple command buffers for basic rendering with multiple draw calls. If you're rendering to one texture, you need one command buffer, which you use to create one renderPassDescriptor that you attach the texture to. Then you need one encoder created from that renderPassDescriptor, on which you encode all the draw calls, buffer changes, state changes, and so on. So, as I said in the comment: you bind the shaders, set the buffers, etc., then draw; and then, instead of calling endEncoding, you set shaders and buffers again and again for however many draw calls and buffer changes you need.
If you want to draw to multiple textures, you typically create multiple renderPassDescriptors (but still use one command buffer). Generally you use one command buffer per frame, or per set of offscreen render passes.
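For example, a rough sketch of that structure (the command queue, pipeline states, vertex buffers, and vertex counts are placeholders, not taken from your code):

// One command buffer, one render pass, several draw calls into dst_texture.
id<MTLCommandBuffer> commandBuffer = [commandQueue commandBuffer];

MTLRenderPassDescriptor *passDesc = [MTLRenderPassDescriptor renderPassDescriptor];
passDesc.colorAttachments[0].texture     = dst_texture;
passDesc.colorAttachments[0].loadAction  = MTLLoadActionClear;
passDesc.colorAttachments[0].storeAction = MTLStoreActionStore;

id<MTLRenderCommandEncoder> encoder =
    [commandBuffer renderCommandEncoderWithDescriptor:passDesc];

// First third
[encoder setRenderPipelineState:pipelineState1];
[encoder setVertexBuffer:vertexBuffer1 offset:0 atIndex:0];
[encoder drawPrimitives:MTLPrimitiveTypeTriangle vertexStart:0 vertexCount:vertexCount1];

// Middle third
[encoder setRenderPipelineState:pipelineState2];
[encoder setVertexBuffer:vertexBuffer2 offset:0 atIndex:0];
[encoder drawPrimitives:MTLPrimitiveTypeTriangle vertexStart:0 vertexCount:vertexCount2];

// Last third
[encoder setRenderPipelineState:pipelineState3];
[encoder setVertexBuffer:vertexBuffer3 offset:0 atIndex:0];
[encoder drawPrimitives:MTLPrimitiveTypeTriangle vertexStart:0 vertexCount:vertexCount3];

[encoder endEncoding];
[commandBuffer commit];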
The manual synchronization is only required:
For Untracked Resources.
Across Multiple Devices.
Between a GPU and the CPU.
Between separate command queues.
Otherwise, Metal automatically synchronizes (tracked) resources between command buffers, even if they are running in parallel.
If a command buffer includes write or read operations on a given MTLTexture, you must ensure that these operations complete before reading or writing the MTLTexture contents. You can use the addCompletedHandler: method, waitUntilCompleted method, or custom semaphores to signal that a command buffer has completed execution.
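For example, if the CPU (or another queue/device) does need the texture, a non-blocking completion handler could look roughly like this (a sketch building on the last buffer from the question; the handler must be added before the buffer is committed):

[buffer3 addCompletedHandler:^(id<MTLCommandBuffer> completed) {
    // At this point all of buffer3's writes to dst_texture have finished,
    // so it is safe to read the texture on the CPU or hand it to another queue.
}];
[buffer3 commit];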
I am having a problem with NSOperationQueue: if I add the same operation 200 times, the method behaves as expected.
But if I increase the for loop to 500 iterations, the parameter becomes empty by the time the queue starts executing the tasks. Below is the code snippet.
- (void)someMethod:(char *)param1 {
    NSBlockOperation *theOp = [NSBlockOperation blockOperationWithBlock: ^{
        // use this parameter and do something
    }];
    [[[MyQueueService sharedMyQueueService] operationQueue] addOperation:theOp];
}
This is how I am invoking the above method
for (int index = 1; index < 500; index++) {
    MyClass *classInstance = [[MyClass alloc] init];
    NSString *parm1 = [NSString stringWithFormat:@"%d", index];
    [classInstance someMethod:(char *)[parm1 cStringUsingEncoding:NSUTF8StringEncoding]];
}
The parameter becomes empty, i.e. "", when I run the same method 500 times, and because of this I am unable to perform the other operations. Please help me with this.
The problem is not with NSOperationQueue. The issue is the use of char *. As the documentation for cStringUsingEncoding says:
The returned C string is guaranteed to be valid only until either the receiver is freed, or until the current autorelease pool is emptied, whichever occurs first. You should copy the C string or use getCString:maxLength:encoding: if it needs to store the C string beyond this time.
Bottom line, simple C pointers like char * do not participate in (automatic) reference counting. Your code is using dangling pointers to unmanaged buffers. This is exceedingly dangerous and, when not done properly (as in this case), leads to undefined behavior. The resulting behavior is dictated by whether the memory in question happened to be reused for other purposes in the intervening time, which can lead to unpredictable behavior that changes based upon completely unrelated factors (e.g. the loop count or whatever).
You should try running your app with the Address Sanitizer turned on (found in the scheme settings, under the "Run" settings, on the Diagnostics tab) and it will likely report some of these issues. E.g. when I ran your code with Address Sanitizer, it reported:
==15249==ERROR: AddressSanitizer: heap-use-after-free on address 0x60300003a458 at pc 0x00010ba3bde6 bp 0x70000e837880 sp 0x70000e837028
For more information, see Address Sanitizer documentation or its introductory video.
The easiest solution is going to be to eliminate the char * and instead use the int index value or use an object, such as NSString *.
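For instance, a rough sketch of the NSString variant (MyQueueService, its operationQueue, and MyClass are carried over from your snippet; the NSLog body is just a placeholder):

- (void)someMethod:(NSString *)param1 {
    NSBlockOperation *theOp = [NSBlockOperation blockOperationWithBlock:^{
        // The block captures and retains param1, so it remains valid
        // no matter when the queue gets around to running the operation.
        NSLog(@"processing %@", param1);
    }];
    [[[MyQueueService sharedMyQueueService] operationQueue] addOperation:theOp];
}

// Caller: pass the NSString directly instead of a short-lived C string.
for (int index = 1; index < 500; index++) {
    MyClass *classInstance = [[MyClass alloc] init];
    [classInstance someMethod:[NSString stringWithFormat:@"%d", index]];
}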
I generate bitmaps using the following code (simplified for the sake of brevity):
for (int frameIndex = 0; frameIndex < 90; frameIndex++) {
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(130, 130), NO, 0);

    // Making some rendering on the context.

    // Save the current snapshot from the context.
    UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
    [self.snapshots addObject:snapshot];

    UIGraphicsEndImageContext();
}
So, nothing non-trivial, but everything gets complicated when the operating system gives you only about 30 MB of memory for everything (in this particular case it is watchOS 2, but it is not really an OS-dependent question) and, when you exceed the quota, simply kills the application's process.
The following graph from the Allocations instrument illustrates the problem:
It is the same graph with different annotations of memory consumption - before, at the peak of, and after the execution of the code above. As can be seen, about 5.7 MB of bitmaps are generated in the end, which is an absolutely acceptable result. What is not acceptable is the memory consumption at the peak of the graph (44.6 MB) - all of that memory is eaten by CoreUI: image data. Given that the work happens on a background thread, the execution time is not that important.
So the questions are: what is the right approach to decreasing memory consumption (perhaps at the cost of a longer execution time) so that it fits the memory quota, and why does memory consumption grow like this even though UIGraphicsEndImageContext is called?
Update 1:
I think splitting the whole operation up by using NSOperation, NSTimer, etc. will do the trick, but I am still trying to come up with a synchronous solution.
I tried to gather all the answers together and tested the following piece of code:
UIGraphicsBeginImageContextWithOptions(CGSizeMake(130, 130), NO, 0);

for (int frameIndex = 0; frameIndex < 45; frameIndex++) {
    // Making some rendering on the context.
    @autoreleasepool {
        UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
        [self.snapshots addObject:snapshot];
    }
    CGContextClearRect(UIGraphicsGetCurrentContext(), CGRectMake(0, 0, 130, 130));
}

for (int frameIndex = 0; frameIndex < 45; frameIndex++) {
    // Making some rendering on the context.
    @autoreleasepool {
        UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
        [self.snapshots addObject:snapshot];
    }
    CGContextClearRect(UIGraphicsGetCurrentContext(), CGRectMake(0, 0, 130, 130));
}

UIGraphicsEndImageContext();
What has changed:
Split the 90 iterations into 2 parts of 45 each.
Moved the graphics context outside the loops and cleared it after each iteration instead of creating a new one.
Wrapped taking and storing the snapshots in an autorelease pool.
As a result, nothing changed; memory consumption remains at the same level.
Also, removing the taking and storing of the snapshot altogether decreases memory consumption by only about 4 MB, i.e. less than 10%.
Update 2:
Doing the rendering on a timer that fires every 3 seconds generates the following graph:
As you can see, memory is not freed (to be precise, not fully freed) even though the rendering is spread out over time. Something tells me that the memory is not freed as long as the object that performs the rendering exists.
Update 3:
The problem has been solved by combining 3 approaches:
Splitting the whole rendering task into subtasks. For example, 90 drawings are split into 6 subtasks of 15 drawings each (the number 15 was found empirically).
Executing all subtasks serially using dispatch_after, with a small interval after each one (0.05 s in my case).
And the last and most important point: to avoid the memory leak seen on the last graph, each subtask should be executed in the context of a new object. For example:
self.snapshots = [[SnaphotRender new] renderSnapshotsInRange:NSMakeRange(0, 15)];
Thanks to everyone for answering, but @EmilioPelaez was closest to the right answer.
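For reference, a rough sketch of how these three points could fit together (the renderBatchStartingAt: helper is illustrative, and renderSnapshotsInRange: is assumed to return an array of UIImage objects):

- (void)renderBatchStartingAt:(NSUInteger)start {
    if (start >= 90) { return; }  // 6 batches of 15 frames in total

    @autoreleasepool {
        // A fresh renderer object per subtask, so its memory can die with it.
        NSArray *batch = [[SnaphotRender new] renderSnapshotsInRange:NSMakeRange(start, 15)];
        [self.snapshots addObjectsFromArray:batch];
    }

    // Small pause before the next batch so the freed memory can actually drain.
    dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(0.05 * NSEC_PER_SEC)),
                   dispatch_get_main_queue(), ^{
        [self renderBatchStartingAt:start + 15];
    });
}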
Corrected to the updated question's frame count.
The total byte size of the images could be 130 * 130 * 4 (bytes per pixel) * 90 = ~6MB.
Not too sure about the watch, but temporary memory might be building up. You could try wrapping the snapshot code in an @autoreleasepool block:
@autoreleasepool {
    UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
    [self.snapshots addObject:snapshot];
}
I think your problem is that you are taking all the snapshots in the same context (the context the for loop is in). I believe the memory is not released until that context ends, which is when the graph goes down.
I would suggest you reduce the scope of that context: instead of using a for loop to draw all the frames, keep track of the progress with some iVars and draw just one frame; whenever you finish rendering a frame, call the function again with dispatch_after and update the variables (see the sketch below). Even if the delay is 0, it will allow the context to end and clean up the memory that is no longer being used.
PS: when I say context I don't mean a graphics context, I mean a certain scope in your code.
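A minimal sketch of that idea (currentFrameIndex is a hypothetical NSUInteger property used to track progress; the frame size is taken from the question):

- (void)renderNextFrame {
    if (self.currentFrameIndex >= 90) { return; }   // all frames rendered

    UIGraphicsBeginImageContextWithOptions(CGSizeMake(130, 130), NO, 0);
    // ... draw the frame at self.currentFrameIndex ...
    UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    [self.snapshots addObject:snapshot];
    self.currentFrameIndex += 1;

    // Even a zero delay lets the current scope end, so its temporary
    // allocations can be reclaimed before the next frame is drawn.
    dispatch_after(dispatch_time(DISPATCH_TIME_NOW, 0), dispatch_get_main_queue(), ^{
        [self renderNextFrame];
    });
}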
Wait - image resolution plays a big role here. For example, a one-megabyte JPEG with dimensions of 5000 px * 3000 px will consume about 60 MB of RAM once decoded (5000 * 3000 * 4 bytes = 60 MB), because images get decompressed into RAM. So let's troubleshoot: first, please tell us what image sizes (dimensions) you use.
Edit (after clarifications):
I think that the proper way is not to store the UIImage objects directly, but compressed NSData objects (e.g. using UIImageJPEGRepresentation), and then, when needed, convert them back to UIImage objects.
However, if you use many of them simultaneously, you're going to run out of memory quite rapidly.
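A small sketch of that idea (self.snapshotData is a hypothetical NSMutableArray of NSData, and 0.8 is an arbitrary JPEG quality):

// Store compressed JPEG data instead of live UIImage objects.
NSData *jpegData = UIImageJPEGRepresentation(snapshot, 0.8);
[self.snapshotData addObject:jpegData];

// Later, decode a frame only when it is actually needed.
UIImage *frame = [UIImage imageWithData:self.snapshotData[frameIndex]];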
Original answer:
Actually, the total size can be higher than 10 MB (probably > 20 MB) depending on the scale factor (as seen here). Note that UIGraphicsBeginImageContextWithOptions takes a scale parameter, in contrast to UIGraphicsBeginImageContext. So my guess is that this is somehow related to a screenshot, isn't it?
Also, the method UIGraphicsGetImageFromCurrentImageContext is thread safe, so it might be returning a copy or using more memory. Then you have your 40 MB.
This answer states that iOS somehow stores images compressed when they are not displayed. That may be the reason for the 6 MB of usage afterwards.
My final guess is that the device detects a memory peak and then tries to save memory somehow. Since the images are not being used, it compresses them internally and recycles the memory.
So I wouldn't worry, because it looks like the system takes care of it by itself. But you can also do as others have suggested: save the images to files and don't keep them in memory if you're not going to use them.
I'm creating a CCNode and setting it to my player's position. In debug draw I can see the physics object, but the sprite is invisible or nil or something. It doesn't crash; the sprite simply doesn't appear. The bomb also travels the proper path and its selector method is called.
Does NOT Appear:
GameObject *bomb = [_useBombArray nextSprite];
bomb.tag = kShipMissile;
[bomb stopAllActions];

NSLog(@"_bombSpawnPoint: %.0f, %.0f", _bombSpawnPoint.x, _bombSpawnPoint.y);
bomb.position = _bombSpawnPoint;
I have gotten it to appear by doing this:
GameObject *bomb = [_useBombArray nextSprite];
bomb.tag = kShipMissile;
[bomb stopAllActions];
bomb.position = ccp(_winSize.width * 0.5, _winSize.width * 0.5);
_bombSpawnPoint is set prior to this, and the logged output looks correct.
Originally I thought I was creating the object at an inopportune time in the update loop, so I changed the function slightly to make sure it is called in the proper order within the update method.
Not sure what's causing this! Please help!
I've created all my objects like this and they've all worked perfectly thus far!
This turned out to be caused by the bomb's texture not being in the proper batch node.
The error was not triggered until I removed the excess subclasses and used the sprite on its own.
The error received was: 'CCSprite is not using the same texture id'.
Once I used the other batch node, everything worked perfectly. Hope this helps someone!
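For illustration only (the sheet and frame names below are made up, not the asker's actual assets): a sprite added to a CCSpriteBatchNode must use a frame from the same texture atlas that backs the batch node, otherwise cocos2d raises the 'CCSprite is not using the same texture id' assertion.

// Load the atlas and create a batch node backed by its texture.
[[CCSpriteFrameCache sharedSpriteFrameCache] addSpriteFramesWithFile:@"bombSheet.plist"];
CCSpriteBatchNode *bombBatch = [CCSpriteBatchNode batchNodeWithFile:@"bombSheet.png"];
[self addChild:bombBatch];

// This frame comes from bombSheet.png, so it may be added to bombBatch.
CCSprite *bomb = [CCSprite spriteWithSpriteFrameName:@"bomb.png"];
[bombBatch addChild:bomb];

// A sprite whose frame lives in a different atlas must go into a different
// batch node (or be added directly to the layer), never into bombBatch.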
Apple recently added a new constant to the CIDetector class called CIDetectorTracking, which appears to be able to track faces between frames of a video. This would be very beneficial for me if I could figure out how it works.
I've tried adding this key to the detector's options dictionary using every object I can think of that is remotely relevant, including my AVCaptureStillImageOutput instance, the UIImage I'm working on, YES, 1, etc.
NSDictionary *detectorOptions = [[NSDictionary alloc] initWithObjectsAndKeys:CIDetectorAccuracyHigh, CIDetectorAccuracy, myAVCaptureStillImageOutput, CIDetectorTracking, nil];
But no matter what parameter I try to pass, it either crashes (obviously I'm guessing at it here) or the debugger outputs:
Unknown CIDetectorTracking specified. Ignoring.
Normally, I wouldn't be guessing at this, but resources on this topic are virtually nonexistent. Apple's class reference states:
A key used to enable or disable face tracking for the detector. Use
this option when you want to track faces across frames in a video.
Other than availability being iOS 6+ and OS X 10.8+ that's it.
Comments inside CIDetector.h:
/*The key in the options dictionary used to specify that feature
tracking should be used. */
If that wasn't bad enough, a Google search provides 7 results (8 once it finds this post), all of which are either Apple class references, API diffs, an SO post asking how to achieve this in iOS 5, or third-party copies of the former.
All that being said, any hints or tips for the proper usage of CIDetectorTracking would be greatly appreciated!
You're right, this key is not very well documented. Besides the API docs, it is also not explained in:
the CIDetector.h header file
the Core Image Programming Guide
the WWDC 2012 Session "520 - What's New in Camera Capture"
the sample code to this session (StacheCam 2)
I tried different values for CIDetectorTracking and the only accepted values seem to be @(YES) and @(NO). With other values it prints this message in the console:
Unknown CIDetectorTracking specified. Ignoring.
When you set the value to @(YES) you should get tracking IDs with the detected face features.
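For example, a minimal sketch of a Core Image face detector with tracking enabled (ciImage stands in for a CIImage built from the current video frame):

NSDictionary *detectorOptions = @{ CIDetectorAccuracy : CIDetectorAccuracyHigh,
                                   CIDetectorTracking : @(YES) };
CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                          context:nil
                                          options:detectorOptions];

for (CIFaceFeature *face in [detector featuresInImage:ciImage]) {
    if (face.hasTrackingID) {
        NSLog(@"face tracking id: %d", face.trackingID);
    }
}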
However, when you want to detect faces in content captured from the camera, you should prefer the face detection API in AVFoundation. It has face tracking built in, and the face detection runs on the GPU in the background, so it will be much faster than Core Image face detection.
It requires iOS 6 and at least an iPhone 4S or iPad 2.
The faces are sent as metadata objects (AVMetadataFaceObject) to the AVCaptureMetadataOutputObjectsDelegate.
You can use this code (taken from StacheCam 2 and the slides of the WWDC session mentioned above) to setup face detection and get face metadata objects:
- (void) setupAVFoundationFaceDetection
{
    self.metadataOutput = [AVCaptureMetadataOutput new];
    if ( ! [self.session canAddOutput:self.metadataOutput] ) {
        return;
    }

    // Metadata processing will be fast, and mostly updating UI which should be done on the main thread
    // So just use the main dispatch queue instead of creating a separate one
    // (compare this to the expensive CoreImage face detection, done on a separate queue)
    [self.metadataOutput setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];
    [self.session addOutput:self.metadataOutput];

    if ( ! [self.metadataOutput.availableMetadataObjectTypes containsObject:AVMetadataObjectTypeFace] ) {
        // face detection isn't supported (via AV Foundation), fall back to CoreImage
        return;
    }

    // We only want faces, if we don't set this we would detect everything available
    // (some objects may be expensive to detect, so best form is to select only what you need)
    self.metadataOutput.metadataObjectTypes = @[ AVMetadataObjectTypeFace ];
}
// AVCaptureMetadataOutputObjectsDelegate
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputMetadataObjects:(NSArray *)metadataObjects
       fromConnection:(AVCaptureConnection *)c
{
    for ( AVMetadataObject *object in metadataObjects ) {
        if ( [[object type] isEqual:AVMetadataObjectTypeFace] ) {
            AVMetadataFaceObject *face = (AVMetadataFaceObject *)object;
            CMTime timestamp = [face time];
            CGRect faceRectangle = [face bounds];
            NSInteger faceID = [face faceID];
            CGFloat rollAngle = [face rollAngle];
            CGFloat yawAngle = [face yawAngle];
            NSNumber *boxedFaceID = @(face.faceID); // use this id for tracking

            // Do interesting things with this face
        }
    }
}
If you want to display the face frames in the preview layer you need to get the transformed face object:
AVMetadataFaceObject * adjusted = (AVMetadataFaceObject*)[self.previewLayer transformedMetadataObjectForMetadataObject:face];
For details check out the sample code from WWDC 2012.