I'm attempting to convert CVPixelBufferRefs from a video source to CGImageRefs using the vImage conversion routines on OS X 10.10. For the most part this works fine. However, each time I initialize a new vImage_Buffer from my CVPixelBufferRef, memory is gobbled up and never returned.
Here is a simplified version of the conversion, which ideally should use no net memory at the end of the day:
CVPixelBufferRef pixelBuffer = ...; // retained CVPixelBufferRef from somewhere else
vImage_Buffer buffer;
vImage_CGImageFormat format = {.bitsPerComponent = 8, .bitsPerPixel = 32, .colorSpace = NULL, .bitmapInfo = kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Little, .version = 0, .decode = NULL, .renderingIntent = kCGRenderingIntentAbsoluteColorimetric};
vImage_Error imageError = vImageBuffer_InitWithCVPixelBuffer(&buffer, &format, pixelBuffer, NULL, NULL, kvImagePrintDiagnosticsToConsole);
// Do conversion here
free(buffer.data);
Commenting out the init and the free effectively uses no more memory than I started with. With those two lines in place, however, 6 MB are consumed each time.
If I comment out only the free, even more memory is consumed, so the free is definitely doing something; I can only assume vImageBuffer_InitWithCVPixelBuffer is using more memory than it should. Has anyone else seen this?
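One thing I have not fully ruled out is that the growth is just deferred cleanup of autoreleased intermediates rather than a true leak; wrapping each conversion in an explicit autorelease pool should show the difference. A minimal sketch of that check, using the same pixelBuffer and format as above:
@autoreleasepool {
    vImage_Buffer buffer;
    vImage_Error imageError = vImageBuffer_InitWithCVPixelBuffer(&buffer, &format, pixelBuffer, NULL, NULL, kvImagePrintDiagnosticsToConsole);
    if (imageError == kvImageNoError) {
        // Do conversion here
        free(buffer.data);
    }
}
// If memory stays flat with the pool in place, nothing is actually leaking;
// the intermediates were only waiting for an enclosing pool to drain.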
For completeness, here is the whole conversion method, going from CVPixelBufferRef to NSImage:
CVPixelBufferRef pixelBuffer = ...; // retained CVPixelBufferRef from somewhere else
NSImage *image = nil;
vImage_Buffer buffer;
vImage_CGImageFormat format = {.bitsPerComponent = 8, .bitsPerPixel = 32, .colorSpace = NULL, .bitmapInfo = kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Little, .version = 0, .decode = NULL, .renderingIntent = kCGRenderingIntentAbsoluteColorimetric};
vImage_Error imageError = vImageBuffer_InitWithCVPixelBuffer(&buffer, &format, pixelBuffer, NULL, NULL, kvImagePrintDiagnosticsToConsole);
if (imageError != 0) {
NSLog(#"vImageBuffer_InitWithCVPixelBuffer Error: %zd", imageError);
} else {
CGImageRef imageRef = vImageCreateCGImageFromBuffer(&buffer, &format, NULL, NULL, kvImagePrintDiagnosticsToConsole|kvImageHighQualityResampling, &imageError);
if (!imageRef) {
NSLog(#"vImageCreateCGImageFromBuffer Error: %zd", imageError);
} else {
image = [[NSImage alloc] initWithCGImage:imageRef size:NSMakeSize(CGImageGetWidth(imageRef), CGImageGetHeight(imageRef))];
CGImageRelease(imageRef);
NSAssert(image != nil, @"Creating the image failed!");
}
}
free(buffer.data);
My code creates a TIFFRepresentation of an image, which I then want to re-encode into something different. That part is not problematic.
My ImgUtils function is:
+ (CGImageRef) processImageData:(NSData*)rep {
NSBitmapImageRep *bitmapRep = [NSBitmapImageRep imageRepWithData:rep];
int width = bitmapRep.size.width;
int height = bitmapRep.size.height;
size_t pixels_size = width * height;
Byte raw_bytes[pixels_size * 3];
//
// processing, creates and stores raw byte stream
//
int bitsPerComponent = 8;
int bytesPerPixel = 3;
int bitsPerPixel = bytesPerPixel * bitsPerComponent;
int bytesPerRow = bytesPerPixel * width;
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL,
raw_bytes,
pixels_size * bytesPerPixel,
NULL);
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
CGImageRef imageRef = CGImageCreate(width,
height,
bitsPerComponent,
bitsPerPixel,
bytesPerRow,
colorSpaceRef,
bitmapInfo,
provider,
NULL,
NO,
renderingIntent);
[ImgUtils saveToPng:imageRef withSuffix:@"-ok"];
CGColorSpaceRelease(colorSpaceRef);
CGDataProviderRelease(provider);
return imageRef;
}
There is another method, that saves a CGImageRef to filesystem.
+ (BOOL) saveToPng:(CGImageRef)imageRef withSuffix:(NSString*)suffix {
CFURLRef url = (__bridge CFURLRef)[NSURL fileURLWithPath:[NSString stringWithFormat:@"~/Downloads/pic%@.png", suffix]];
CGImageDestinationRef destination = CGImageDestinationCreateWithURL(url, kUTTypePNG, 1, NULL);
CGImageDestinationAddImage(destination, imageRef, nil);
CGImageDestinationFinalize(destination);
CFRelease(destination);
return YES;
}
As you can see, immediately after processing the image, I save it to disk as pic-ok.png.
Here is the code, that calls the processing function:
CGImageRef cgImage = [ImgUtils processImageData:imageRep];
[ImgUtils saveToPng:cgImage withSuffix:@"-bad"];
The problem is that the two images differ. The second one, with the -bad suffix, is corrupted.
See the examples below. It seems as though the memory area the CGImageRef points to is released and overwritten immediately after the method returns.
I also tried return CGImageCreateCopy(imageRef); but it changed nothing.
What am I missing?
CGDataProviderCreateWithData() does not copy the buffer you provide. Its purpose is to allow creation of a data provider that accesses that buffer directly.
Your buffer is created on the stack. It becomes invalid after +processImageData: returns. However, the CGImage still refers to the provider, and the provider still refers to the now-invalid buffer.
One solution would be to create the buffer on the heap and provide a callback via the releaseData parameter that frees it. Another would be to create a CFData from the buffer (which copies it) and then create the data provider using CGDataProviderCreateWithCFData(). Probably the best would be to create a CFMutableData of the desired capacity, set its length to match, and use its storage (CFDataGetMutableBytePtr()) as your buffer from the beginning. That's heap-allocated, memory-managed, and doesn't require any copying.
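A rough sketch of that last option, reusing the names from your method (untested):
// Heap-backed pixel storage owned by a CFMutableData instead of a stack array.
CFIndex byteCount = pixels_size * 3;
CFMutableDataRef pixelData = CFDataCreateMutable(kCFAllocatorDefault, byteCount);
CFDataSetLength(pixelData, byteCount);
UInt8 *raw_bytes = CFDataGetMutableBytePtr(pixelData); // write the processed pixels here
// ... processing fills raw_bytes ...
CGDataProviderRef provider = CGDataProviderCreateWithCFData(pixelData);
CFRelease(pixelData); // the provider keeps its own reference
// Create the CGImage with CGImageCreate() exactly as before, then CGDataProviderRelease(provider).
The data lives as long as the provider (and therefore the CGImage) needs it, and nothing is copied.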
I've been tasked with solving a memory leak in a custom Objective-C class for a legacy app that uses garbage collection.
The class takes in NSData from a JPEG file and can create a thumbnail of the image. There is a method that returns a new NSData object containing the resized image.
ImageThumber * imgt = [ImageThumber withNSData:dataObjectFromJpeg];
[imgt thumbImage:1024];
NSData * smallImage = [imgt imageData];
[imgt thumbImage:256];
NSData * extraSmallImage = [imgt imageData];
It does what it's supposed to do, but it's been discovered in Instruments that for every ImageThumber that's created, an ImageIO_jpeg_Data object is allocated and never deallocated.
When an instance is created using +withNSData:, it creates a CGImage and stores it in a private CGImageRef variable.
I found that if thumbImage: isn't called, the ImageIO_jpeg_Data is deallocated when the ImageThumber is deallocated, which leads me to believe the problem lies somewhere within the thumbImage: method. If thumbImage: is called multiple times, it doesn't create extra ImageIO_jpeg_Data objects.
I have little experience with Core Graphics and garbage collection.
+(id)SAMImageDataWithNSData:(NSData *)data
{
SAMImageData * new = [[[self alloc] init] autorelease];
new.imageData = [NSMutableData dataWithData:data];
CFDataRef imgData = (CFDataRef)data;
CGDataProviderRef imgDataProvider;
imgDataProvider = CGDataProviderCreateWithCFData(imgData);
new->_cgImageRef = CGImageCreateWithJPEGDataProvider(imgDataProvider, NULL, true, kCGRenderingIntentDefault);
CGDataProviderRelease(imgDataProvider);
int width = (int)CGImageGetWidth(new->_cgImageRef);
int height = (int)CGImageGetHeight(new->_cgImageRef);
new.originalSize = NSMakeSize(width, height);
return new;
}
-(void)thumbImage:(int)length
{
/* simple logic to calculate new width and height */
//If the next line is commented out the problem doesn't exist.
//You just don't get the image resize.
[self resizeCGImageToWidth:newSize.width andHeight:newSize.height];
CFMutableDataRef workingData = (CFMutableDataRef)[[NSMutableData alloc] initWithCapacity:0];
CGImageDestinationRef dest;
dest = CGImageDestinationCreateWithData(workingData,kUTTypeJPEG,1,NULL);
CGImageDestinationAddImage(dest,_cgImageRef,NULL);
CGImageDestinationFinalize(dest);
CFRelease(dest);
self.imageData = (NSMutableData *)workingData;
}
This where I believe the problem exists:
- (void)resizeCGImageToWidth:(int)width andHeight:(int)height {
CGColorSpaceRef colorspace = CGImageGetColorSpace(_cgImageRef);
CGContextRef context = CGBitmapContextCreate(NULL, width, height,
CGImageGetBitsPerComponent(_cgImageRef),
CGImageGetBytesPerRow(_cgImageRef),
colorspace,
kCGImageAlphaPremultipliedFirst);
CGColorSpaceRelease(colorspace);
if(context == NULL)
return;
CGContextDrawImage(context, CGRectMake(0, 0, width, height), _cgImageRef);
_cgImageRef = CGBitmapContextCreateImage(context);
CGContextRelease(context);
}
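One thing I am not sure about (so treat this as a guess, not a confirmed fix): _cgImageRef is overwritten here with the image returned by CGBitmapContextCreateImage without releasing the previous one, and as far as I know CGImageRefs are Core Foundation objects that the Objective-C garbage collector does not collect on its own. If that is the problem, something like this sketch is what I would try at the end of the method:
// Unverified sketch: release the previous image before storing the new one.
CGImageRef resized = CGBitmapContextCreateImage(context);
if (resized) {
    CGImageRelease(_cgImageRef); // the old CGImage is not collected automatically
    _cgImageRef = resized;
}
CGContextRelease(context);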
I am trying to read the ARGB pixel data from a PNG image asset in my iOS app.
I am using CGDataProvider to get a CFDataRef as described here:
http://developer.apple.com/library/ios/#qa/qa1509/_index.html
It works perfectly the first time I use it on a certain image. But the second time I use it on THE SAME image, it returns a length 0 CFDataRef.
Maybe I am not releasing something? Why would it do that?
- (GLuint)initWithCGImage:(CGImageRef)newImageSource
{
CGDataProviderRef dataProvider;
CFDataRef dataRef;
GLuint t;
@try {
// NSLog(#"initWithCGImage");
// report_memory2();
CGFloat widthOfImage = CGImageGetWidth(newImageSource);
CGFloat heightOfImage = CGImageGetHeight(newImageSource);
// pixelSizeOfImage = CGSizeMake(widthOfImage, heightOfImage);
// CGSize pixelSizeToUseForTexture = pixelSizeOfImage;
// CGSize scaledImageSizeToFitOnGPU = [GPUImageOpenGLESContext sizeThatFitsWithinATextureForSize:pixelSizeOfImage];
GLubyte *imageData = NULL;
//CFDataRef dataFromImageDataProvider;
// stbi stbiClass;
int x;
int y;
int comp;
dataProvider = CGImageGetDataProvider(newImageSource);
dataRef = CGDataProviderCopyData(dataProvider);
const unsigned char * bytesRef = CFDataGetBytePtr(dataRef);
// NSUInteger length = CFDataGetLength(dataRef);
//CGDataProviderRelease(dataProvider);
//dataProvider = nil;
/*
UIImage *tmpImage = [UIImage imageWithCGImage:newImageSource];
NSData *data2 = UIImagePNGRepresentation(tmpImage);
// if (data2==NULL)
// data2 = UIImageJPEGRepresentation(tmpImage, 1);
unsigned char *bytes = (unsigned char *)[data2 bytes];
NSUInteger length = [data2 length];*/
// stbiClass.img_buffer = bytes;
// stbiClass.buflen = length;
// stbiClass.img_buffer_original = bytes;
// stbiClass.img_buffer_end = bytes + length;
// unsigned char *data = stbi_load_main(&stbiClass, &x, &y, &comp, 0);
//unsigned char * data = bytesRef;
x = widthOfImage;
y = heightOfImage;
comp = CGImageGetBitsPerPixel(newImageSource)/8;
int textureWidth = [self CalcPow2: x];
int textureHeight = [self CalcPow2: y];
unsigned char *scaledData = [self scaleImageWithParams:@{@"x":@(x), @"y":@(y), @"comp":@(comp), @"targetX":@(textureWidth), @"targetY":@(textureHeight)} andData:(unsigned char *)bytesRef];
//CFRelease (dataRef);
// dataRef = nil;
// free (data);
glGenTextures(1, &t);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, t);
GLint format = (comp > 3) ? GL_RGBA : GL_RGB;
imageData = scaledData;
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, format, textureWidth, textureHeight, 0, format, GL_UNSIGNED_BYTE, imageData);
//GLenum err = glGetError();
}
@finally
{
CGDataProviderRelease(dataProvider);
// CGColorSpaceRelease(colorSpaceRef);
CGImageRelease(dataRef);
}
return t;
}
The second time this is called on a CGImageRef that originates from [UIImage imageNamed:path] with the same path as the first time, I get a dataRef of length 0.
It works the first time though.
I have found one big issue with the code I posted and fixed it.
First of all, I was getting crashes even when I didn't load the same image twice but simply loaded more images. Since the issue is related to memory, it failed in all sorts of weird ways.
The issue with the code is this call: "CGDataProviderRelease(dataProvider);"
I am using the data provider of newImageSource, but I didn't create that data provider. That is why I shouldn't release it.
You need to release objects only if you created, retained, or copied them.
Apart from that, my app sometimes crashed due to low memory, but after fixing this I was able to use the "economical" approach, where I allocate and release as soon as possible.
Currently I can't see anything else wrong with this specific code.
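To illustrate the rule with the two calls from this code (my own shorthand for the Core Foundation ownership rules):
CGDataProviderRef dataProvider = CGImageGetDataProvider(newImageSource); // "Get": not owned, do not release
CFDataRef dataRef = CGDataProviderCopyData(dataProvider);                // "Copy": owned, release when done
// ... use the bytes ...
CFRelease(dataRef); // but no CGDataProviderRelease(dataProvider) here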
Previously I read audio samples from a complete audio file using CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer. Now I would like to do the same using time ranges (i.e., I specify a range in time, read a small chunk of audio for that time, then go back and read again). The reason I want to use time ranges is that I want to control the size of each read (so it fits in a packet with a maximum size).
For some reason, there is always a bump between reads. In my code you'll notice that I start the AVAssetReader and end it every time I set a time range, and that's because I cannot dynamically adjust the time range after the reader has started (see here for more details).
Could it be that starting and ending a reader is just too expensive to produce a continuous real-time experience? Or are there other ways of doing this that I'm not aware of?
Also note that this jitter or lag happens no matter where I set the time interval, which makes me believe that starting and ending a reader the way I am is too expensive for real-time audio playback.
- (void) setupReader
{
NSURL *assetURL = [NSURL URLWithString:@"ipod-library://item/item.m4a?id=1053020204400037178"];
songAsset = [AVURLAsset URLAssetWithURL:assetURL options:nil];
track = [songAsset.tracks objectAtIndex:0];
nativeTrackASBD = [self getTrackNativeSettings:track];
// set CM time parameters
assetCMTime = songAsset.duration;
CMTimeReadDurationInSeconds = CMTimeMakeWithSeconds(1, assetCMTime.timescale);
currentCMTime = CMTimeMake(0,assetCMTime.timescale);
}
-(void)readVBRPackets
{
// make sure assetCMTime is greater than currentCMTime
while (CMTimeCompare(assetCMTime,currentCMTime) == 1 )
{
NSError * error = nil;
reader = [[AVAssetReader alloc] initWithAsset:songAsset error:&error];
readerOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:track
outputSettings:nil];
[reader addOutput:readerOutput];
reader.timeRange = CMTimeRangeMake(currentCMTime, CMTimeReadDurationInSeconds);
[reader startReading];
while ((sample = [readerOutput copyNextSampleBuffer])) {
CMItemCount numSamples = CMSampleBufferGetNumSamples(sample);
if (numSamples == 0) {
continue;
}
NSLog(#"reading sample");
CMBlockBufferRef CMBuffer = CMSampleBufferGetDataBuffer( sample );
AudioBufferList audioBufferList;
OSStatus err = CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(
sample,
NULL,
&audioBufferList,
sizeof(audioBufferList),
NULL,
NULL,
kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment,
&CMBuffer
);
const AudioStreamPacketDescription * inPacketDescriptions;
size_t packetDescriptionsSizeOut;
size_t inNumberPackets;
CheckError(CMSampleBufferGetAudioStreamPacketDescriptionsPtr(sample,
&inPacketDescriptions,
&packetDescriptionsSizeOut),
"could not read sample packet descriptions");
inNumberPackets = packetDescriptionsSizeOut/sizeof(AudioStreamPacketDescription);
AudioBuffer audioBuffer = audioBufferList.mBuffers[0];
for (int i = 0; i < inNumberPackets; ++i)
{
SInt64 dataOffset = inPacketDescriptions[i].mStartOffset;
UInt32 packetSize = inPacketDescriptions[i].mDataByteSize;
size_t packetSpaceRemaining;
packetSpaceRemaining = bufferByteSize - bytesFilled;
// if the space remaining in the buffer is not
// enough for the data contained in this packet
// then just write it
if (packetSpaceRemaining < packetSize)
{
[self enqueueBuffer];
}
// copy data to the audio queue buffer
AudioQueueBufferRef fillBuf = audioQueueBuffers[fillBufferIndex];
memcpy((char*)fillBuf->mAudioData + bytesFilled,
(const char*)(audioBuffer.mData + dataOffset), packetSize);
// fill out packet description
packetDescs[packetsFilled] = inPacketDescriptions[i];
packetDescs[packetsFilled].mStartOffset = bytesFilled;
bytesFilled += packetSize;
packetsFilled += 1;
// if this is the last packet, then ship it
size_t packetsDescsRemaining = kAQMaxPacketDescs - packetsFilled;
if (packetsDescsRemaining == 0) {
[self enqueueBuffer];
}
}
CFRelease(CMBuffer);
CMSampleBufferInvalidate(sample);
CFRelease(sample);
}
[reader cancelReading];
reader = NULL;
readerOutput = NULL;
currentCMTime = CMTimeAdd(currentCMTime, CMTimeReadDurationInSeconds);
}
}
I know what happens :-D It took me nearly a whole day to figure it out.
In fact, AVAssetReader fades in the first 1024 samples (maybe a little more). That's why you hear the jitter effect.
I fixed it by reading 1024 samples before the position I really want to read, then skipping those 1024 samples.
I hope it'll work for you also.
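Roughly what I mean (just a sketch; the 1024-frame figure and the sample rate are assumptions you should check against your track's format):
// Pad the requested range backwards by ~1024 frames, then discard those frames.
int32_t sampleRate = 44100; // use the track's real sample rate here
CMTime primingDuration = CMTimeMake(1024, sampleRate);
CMTime paddedStart = CMTimeSubtract(currentCMTime, primingDuration);
if (CMTimeCompare(paddedStart, kCMTimeZero) < 0) {
    paddedStart = kCMTimeZero;
}
reader.timeRange = CMTimeRangeMake(paddedStart, CMTimeAdd(CMTimeReadDurationInSeconds, primingDuration));
// When copying packets out of the first buffers of each read, skip everything
// that falls before currentCMTime — those are the faded-in samples.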
I am trying to create a CVPixelBuffer to allocate a bitmap in it and bind it to an OpenGL texture under iOS 5, but I am having some problems with it. I can generate the pixel buffer, but the IOSurface is always null, so I cannot use it with CVOpenGLESTextureCacheCreateTextureFromImage. Since I am not using a camera or a video but a self-generated bitmap, I cannot use a CVSampleBufferRef to get the pixel buffer from it.
Here is my code, in case anyone knows how to solve this issue.
CVPixelBufferRef renderTarget = nil;
CFDictionaryRef empty; // empty value for attr value.
CFMutableDictionaryRef attrs;
empty = CFDictionaryCreate(kCFAllocatorDefault, // our empty IOSurface properties dictionary
NULL,
NULL,
0,
&kCFTypeDictionaryKeyCallBacks,
&kCFTypeDictionaryValueCallBacks);
attrs = CFDictionaryCreateMutable(kCFAllocatorDefault,
2,
&kCFTypeDictionaryKeyCallBacks,
&kCFTypeDictionaryValueCallBacks);
CFDictionarySetValue(attrs,
kCVPixelBufferIOSurfacePropertiesKey,
empty);
CFDictionarySetValue(attrs, kCVPixelBufferOpenGLCompatibilityKey, [NSNumber numberWithBool:YES]);
CVReturn err = CVPixelBufferCreateWithBytes(kCFAllocatorDefault, tmpSize.width, tmpSize.height, kCVPixelFormatType_32ARGB, imageData, tmpSize.width*4, 0, 0, attrs, &renderTarget);
If I dump the new PixelBufferRef I get:
1 : <CFString 0x3e43024c [0x3f8f8650]>{contents = "OpenGLCompatibility"} = <CFBoolean 0x3f8f8a10 [0x3f8f8650]>{value = true}
2 : <CFString 0x3e43026c [0x3f8f8650]>{contents = "IOSurfaceProperties"} = <CFBasicHash 0xa655750 [0x3f8f8650]>{type = immutable dict, count = 0,
entries =>
}
}
And I get a -6683 error (kCVReturnPixelBufferNotOpenGLCompatible) if I try to use it with a texture cache, this way:
CVOpenGLESTextureCacheCreateTextureFromImage (
kCFAllocatorDefault,
self.eagViewController.textureCache,
renderTarget,
nil, // texture attributes
GL_TEXTURE_2D,
GL_RGBA, // opengl format
tmpSize.width,
tmpSize.height,
GL_BGRA, // native iOS format
GL_UNSIGNED_BYTE,
0,
&renderTextureTemp);