CGImage recording distortion - objective-c

I've written a helper class to record an NSView and save it to a QuickTime file. The view is captured and written to a QuickTime movie, but the output is skewed for some reason. The core of my class is below:
- (void)captureImage
{
    [self getCGImageFromView];
    pixelBuffer = [self getPixelBufferFromCGImage:viewCGImage size:CGRectMake(0, 0, mViewRect.size.width, mViewRect.size.height).size];
    if (pixelBuffer) {
        if (![adapter appendPixelBuffer:pixelBuffer withPresentationTime:CMTimeMake(mCurrentFrame, 20)])
            NSLog(@"AVAssetWriterInputPixelBufferAdaptor: Failed to append pixel buffer.");
        CFRelease(pixelBuffer);
        mCurrentFrame++;
    }
}
- (void)getCGImageFromView
{
    viewBitmapImageRep = [currentView bitmapImageRepForCachingDisplayInRect:mViewRect];
    [currentView cacheDisplayInRect:mViewRect toBitmapImageRep:viewBitmapImageRep];
    viewBitmapFormat = [viewBitmapImageRep bitmapFormat];
    viewCGImage = [viewBitmapImageRep CGImage];
}
- (CVPixelBufferRef)getPixelBufferFromCGImage:(CGImageRef)image size:(CGSize)size
{
    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                             [NSNumber numberWithBool:YES], kCVPixelBufferOpenGLCompatibilityKey,
                             [NSNumber numberWithInt:size.width], kCVPixelBufferWidthKey,
                             [NSNumber numberWithInt:size.height], kCVPixelBufferHeightKey,
                             nil];

    CVPixelBufferRef pxbuffer = NULL;
    CVReturn status = CVPixelBufferCreate(NULL, size.width, size.height, kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef)options, &pxbuffer);
    NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);

    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
    NSParameterAssert(pxdata != NULL);

    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pxdata, size.width, size.height, 8, 4*size.width, rgbColorSpace, kCGImageAlphaPremultipliedFirst);
    NSParameterAssert(context);

    [[currentView layer] renderInContext:context];
    //CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image), CGImageGetHeight(image)), image);

    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);
    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
    return pxbuffer;
}

The problem turned out to be this line:
4*size.width,
Here size.width was an odd number, not a multiple of 4, so the bytes-per-row passed to CGBitmapContextCreate did not match the row stride Core Video actually allocated for the pixel buffer (rows are padded out to an alignment boundary). Rounding the width up to a multiple of 4 fixed it:
4 * (ceil(size.width / 4) * 4)
That solved the problem.
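A slightly more defensive variant (a sketch reusing the pxbuffer, pxdata, size, and rgbColorSpace variables from the code above) is to ask Core Video for the stride it actually allocated instead of computing it:

// Let Core Video report the row stride it allocated, rather than assuming 4 * width.
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pxbuffer);
CGContextRef context = CGBitmapContextCreate(pxdata,
                                             size.width,
                                             size.height,
                                             8,
                                             bytesPerRow,
                                             rgbColorSpace,
                                             kCGImageAlphaPremultipliedFirst);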

Related

Memory leak after call CGContextDrawImage

I've read all the CGContextDrawImage memory-peak answers and tried them, but none of the solutions have worked so far.
Here is my code. After each call to CGContextDrawImage, memory grows by 10-40 MB. I'm using this code to generate a video from images; with around 200 images the app crashes.
Loop through images:
for (UIImage *img in imageArray)
{
    buffer = [self pixelBufferFromCGImage:[img CGImage] ImageSize:img.size];

    BOOL append_ok = NO;
    int j = 0;
    while (!append_ok) {
        if (adaptor.assetWriterInput.readyForMoreMediaData) {
            // print out status:
            NSLog(@"Processing video frame (%d,%lu)", frameCount, (unsigned long)[imageArray count]);

            CMTime frameTime = CMTimeMake(frameCount*frameDuration, (int32_t)fps);
            append_ok = [adaptor appendPixelBuffer:buffer withPresentationTime:frameTime];
            if (!append_ok) {
                NSError *error = videoWriter.error;
                if (error != nil) {
                    NSLog(@"Unresolved error %@, %@.", error, [error userInfo]);
                }
            }
        }
        else {
            printf("adaptor not ready %d, %d\n", frameCount, j);
            [NSThread sleepForTimeInterval:0.1];
        }
    }
    if (!append_ok) {
        printf("error appending image %d after %d attempts\n", frameCount, j);
    }
    frameCount++;
}
Here is the pixelBufferFromCGImage function:
- (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image ImageSize:(CGSize)size {
    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                             nil];
    CVPixelBufferRef pxbuffer = NULL;
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault,
                                          size.width,
                                          size.height,
                                          kCVPixelFormatType_32ARGB,
                                          (__bridge CFDictionaryRef)options,
                                          &pxbuffer);
    if (status != kCVReturnSuccess) {
        NSLog(@"Failed to create pixel buffer");
    }

    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);

    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pxdata, size.width,
                                                 size.height, 8, 4*size.width, rgbColorSpace,
                                                 kCGImageAlphaPremultipliedFirst);
                                                 //kCGImageAlphaNoneSkipFirst);
    CGContextConcatCTM(context, CGAffineTransformMakeRotation(0));
    CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
                                           CGImageGetHeight(image)), image);

    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);
    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
    return pxbuffer;
}
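Not a confirmed fix, but two things stand out in the loop above: the pixel buffer returned by pixelBufferFromCGImage:ImageSize: is never released (the helper creates it with CVPixelBufferCreate, so ownership passes to the caller), and the per-frame temporaries only drain when an autorelease pool does. A minimal sketch of the loop body with both addressed (the readyForMoreMediaData wait from the original loop is omitted for brevity):

for (UIImage *img in imageArray) {
    @autoreleasepool {
        // Drain UIImage/CGImage temporaries every frame instead of at the end.
        CVPixelBufferRef buffer = [self pixelBufferFromCGImage:[img CGImage] ImageSize:img.size];
        if (buffer) {
            CMTime frameTime = CMTimeMake(frameCount * frameDuration, (int32_t)fps);
            [adaptor appendPixelBuffer:buffer withPresentationTime:frameTime];
            CVPixelBufferRelease(buffer); // balance the CVPixelBufferCreate inside the helper
        }
        frameCount++;
    }
}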

Screenshot code working perfectly in Simulator but not in iOS Device

This is my code for taking a screenshot and saving it to the photo library or sending it by email. The problem is that it works perfectly in the Simulator, but when I run it on a device I only get a white image, no matter what is on screen. I have tried iOS 5 and iOS 6, with no luck.
What could be the reason, and where am I going wrong?
NSInteger myDataLength = 320 * 430 * 4;
GLubyte *buffer1 = (GLubyte *)malloc(myDataLength);
GLubyte *buffer2 = (GLubyte *)malloc(myDataLength);

// Read image memory from OpenGL
glReadPixels(0, 50, 320, 430, GL_RGBA, GL_UNSIGNED_BYTE, buffer1);

// Invert the image buffer into the secondary buffer
for (int y = 0; y < 430; y++) {
    for (int x = 0; x < 320 * 4; x++) {
        buffer2[(429 - y) * 320 * 4 + x] = buffer1[y * 4 * 320 + x];
    }
}

// Create bitmap context
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef destContext = CGBitmapContextCreate(buffer2, 320, 430, 8, 320 * 4, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);

// Get image from context
CGImageRef resultImageRef = CGBitmapContextCreateImage(destContext);
UIImage *resultImg = [UIImage imageWithCGImage:resultImageRef];
CGImageRelease(resultImageRef);

// Send to mail or save to photo library
if (sendMail) {
    [self emailImage:resultImg];
} else {
    UIImageWriteToSavedPhotosAlbum(resultImg, nil, nil, nil);
}

// Release allocated memory
// [resultImg release];
free(buffer2);
free(buffer1);
CGContextRelease(destContext);
CGColorSpaceRelease(colorSpace);
Is there any class I'm using that is supported by the Simulator but not by the device? If so, which one? As far as I can tell from searching, nothing like that exists.
Update: I have found that the code above is fine and the issue does not lie there. The actual problem is with
eaglLayer.drawableProperties
I changed this code:
eaglLayer.drawableProperties = [NSDictionary dictionaryWithObjectsAndKeys:
                                [NSNumber numberWithBool:NO],
                                kEAGLDrawablePropertyRetainedBacking,
                                kEAGLColorFormatRGB565,
                                kEAGLDrawablePropertyColorFormat, nil];
to this:
eaglLayer.drawableProperties = [NSDictionary dictionaryWithObjectsAndKeys:
                                [NSNumber numberWithBool:YES],
                                kEAGLDrawablePropertyRetainedBacking,
                                kEAGLColorFormatRGB565,
                                kEAGLDrawablePropertyColorFormat, nil];
I just set
kEAGLDrawablePropertyRetainedBacking = YES
and now it is working fine, but I don't know why. If anyone knows, please let me know.
Try this one; it works perfectly for me on the device:
+ (UIImage *)imageFromView:(UIView *)view inRect:(CGRect)rect {
    CGFloat screenScale = [[UIScreen mainScreen] scale];
    if ([view respondsToSelector:@selector(contentSize)]) {
        UIScrollView *scrollView = (UIScrollView *)view;
        UIGraphicsBeginImageContextWithOptions(scrollView.contentSize, NO, screenScale);
    } else {
        UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, screenScale);
    }
    CGContextRef resizedContext = UIGraphicsGetCurrentContext();
    [view.layer renderInContext:resizedContext];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    CGImageRef croppedRef = CGImageCreateWithImageInRect(image.CGImage,
                                                         CGRectMake(rect.origin.x * screenScale,
                                                                    rect.origin.y * screenScale,
                                                                    rect.size.width * screenScale,
                                                                    rect.size.height * screenScale));
    image = [UIImage imageWithCGImage:croppedRef];
    CGImageRelease(croppedRef); // balance CGImageCreateWithImageInRect

    return [image imageScaledToSize:CGSizeMake(rect.size.width, rect.size.height)];
}
- (UIImage *)imageScaledToSize:(CGSize)newSize {
    if ((self.size.width == newSize.width) && (self.size.height == newSize.height)) {
        return self;
    }
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
    [self drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
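Assuming both methods live in a UIImage category, a hypothetical call site (the view and rect here are placeholders) might look like:

// Capture a 320x430 region of a view and save it, as in the question.
UIImage *snapshot = [UIImage imageFromView:self.view
                                    inRect:CGRectMake(0, 50, 320, 430)];
UIImageWriteToSavedPhotosAlbum(snapshot, nil, nil, nil);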

Export UIImages to MP4

Ok so I am in a bind and I have been staring at this for too long to think clearly.
Basically, all I need is a method that takes in an NSArray of images and writes an mp4 or other video format to disk.
I am not worried about monitoring the user's gesture or anything like that. The images already exist on disk -- I just need to do the conversion.
I understand that I need an AVAssetWriter and something like:
- (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image
{
    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                             nil];
    CVPixelBufferRef pxbuffer = NULL;
    CVPixelBufferCreate(kCFAllocatorDefault, CGImageGetWidth(image),
                        CGImageGetHeight(image), kCVPixelFormatType_32ARGB, (CFDictionaryRef)options,
                        &pxbuffer);

    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);

    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pxdata, CGImageGetWidth(image),
                                                 CGImageGetHeight(image), 8, 4*CGImageGetWidth(image), rgbColorSpace,
                                                 kCGImageAlphaNoneSkipFirst);
    CGContextConcatCTM(context, CGAffineTransformMakeRotation(0));
    // CGAffineTransform flipVertical = CGAffineTransformMake(
    //     1, 0, 0, -1, 0, CGImageGetHeight(image)
    // );
    // CGContextConcatCTM(context, flipVertical);
    // CGAffineTransform flipHorizontal = CGAffineTransformMake(
    //     -1.0, 0.0, 0.0, 1.0, CGImageGetWidth(image), 0.0
    // );
    // CGContextConcatCTM(context, flipHorizontal);

    CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
                                           CGImageGetHeight(image)), image);

    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);
    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
    return pxbuffer;
}
I just can't see how to put it all into one clean helper. And before you ask: I have seen "How to display video from images" and the like, but just running the images through a UIImageView is not what I am looking for here. This needs to be App Store legal, so FFmpeg is not an option.
Any thoughts / guidance?
https://github.com/meslater/MSImageMovieEncoder encodes images to H.264.
You provide the images as context refs or CVPixelBufferRefs (see the sample code in the repository) by implementing
-(BOOL)nextFrameInCVPixelBuffer:(CVPixelBufferRef*)pixelBuf;
Each call to this method should fill in a buffer and return YES. When there are no more images, return NO, and you'll find the movie on disk.
For more details, read "How do I export UIImage array as a movie?" (it's basically the same code) or read the GitHub files.
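If you would rather wire up AVFoundation directly instead of using a library, here is a rough sketch of the writer side. The outputURL, the 30 fps timescale, and the reuse of a pixelBufferFromCGImage: helper like the one in the question are all assumptions, and error handling is minimal:

#import <AVFoundation/AVFoundation.h>

- (void)writeImages:(NSArray *)images toURL:(NSURL *)outputURL size:(CGSize)size
{
    NSError *error = nil;
    AVAssetWriter *writer = [AVAssetWriter assetWriterWithURL:outputURL
                                                     fileType:AVFileTypeMPEG4
                                                        error:&error];

    NSDictionary *settings = @{ AVVideoCodecKey  : AVVideoCodecH264,
                                AVVideoWidthKey  : @(size.width),
                                AVVideoHeightKey : @(size.height) };
    AVAssetWriterInput *input =
        [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                           outputSettings:settings];
    AVAssetWriterInputPixelBufferAdaptor *adaptor =
        [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:input
                                                                         sourcePixelBufferAttributes:nil];
    [writer addInput:input];
    [writer startWriting];
    [writer startSessionAtSourceTime:kCMTimeZero];

    int64_t frame = 0;
    for (UIImage *img in images) {
        @autoreleasepool {
            CVPixelBufferRef buffer = [self pixelBufferFromCGImage:img.CGImage];
            while (!input.readyForMoreMediaData) {
                [NSThread sleepForTimeInterval:0.05];   // crude back-pressure
            }
            [adaptor appendPixelBuffer:buffer
                  withPresentationTime:CMTimeMake(frame, 30)];   // 30 fps, adjust to taste
            CVPixelBufferRelease(buffer);
            frame++;
        }
    }

    [input markAsFinished];
    [writer finishWritingWithCompletionHandler:^{
        NSLog(@"Movie written to %@", outputURL);
    }];
}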

How can I save the image from IKImageView?

Okay... I want to save only the visible rectangle of the IKImageView image.
My problem is that an image taken in portrait mode is somehow not saved in the right orientation. I draw the image with this code:
[sourceImg drawInRect:targetRect fromRect:sourceRect operation:NSCompositeSourceOver fraction:1.0f];
On the other hand, if I draw the image with this code:
[sourceImg drawAtPoint:NSZeroPoint fromRect:sourceRect operation:NSCompositeSourceOver fraction:1.0f];
it is in the right orientation, but edits like zooming are lost. That is, it saves the right cut-out, but unfortunately not at the right size.
Here is how I load the image:
- (IBAction)imageSelectionButtonAction:(id)sender {
    NSLog(@"%s", __FUNCTION__);
    NSOpenPanel *panel = [NSOpenPanel openPanel];
    id model = [self getModel];
    if (imageView) {
        [panel setAllowedFileTypes:[NSImage imageFileTypes]];
        [panel beginSheetModalForWindow:[[NSApplication sharedApplication] mainWindow] completionHandler:^(NSInteger returnCode) {
            if (returnCode == 1) {
                NSURL *imageUrl = [panel URL];
                CGImageRef image = NULL;
                CGImageSourceRef isr = CGImageSourceCreateWithURL((__bridge CFURLRef)imageUrl, NULL);
                if (isr) {
                    NSDictionary *options = [NSDictionary dictionaryWithObject:(id)kCFBooleanTrue forKey:(id)kCGImageSourceShouldCache];
                    image = CGImageSourceCreateImageAtIndex(isr, 0, (__bridge CFDictionaryRef)options);
                    if (image) {
                        _imageProperties = (__bridge_transfer NSDictionary *)CGImageSourceCopyPropertiesAtIndex(isr, 0, (__bridge CFDictionaryRef)_imageProperties);
                        _imageUTType = (__bridge NSString *)CGImageSourceGetType(isr);
                    }
                    CFRelease(isr);
                }
                if (image) {
                    [imageView setImage:image imageProperties:_imageProperties];
                    CGImageRelease(image);
                }
                [[model saveTempMutabDict] setValue:[imageUrl absoluteString] forKey:@"tempImage"];
            }
        }];
        return;
    }
}
And here is how I save it:
- (void)saveImage:(NSString *)path {
    // get the current image from the image view
    CGImageRef sourceImgRef = [imageView image];
    NSRect targetRect = [imageView visibleRect];
    NSImage *sourceImg = [[NSImage alloc] initWithCGImage:sourceImgRef size:NSZeroSize];

    NSMutableDictionary *thisTiffDict = [_imageProperties valueForKey:@"{TIFF}"];
    NSInteger theOrientation = [[thisTiffDict valueForKey:@"Orientation"] integerValue];

    NSImage *targetImg = nil;
    if (theOrientation == 6) {
        targetImg = [[NSImage alloc] initWithSize:NSMakeSize([imageView frame].size.height, [imageView frame].size.width)];
    } else {
        targetImg = [[NSImage alloc] initWithSize:NSMakeSize([imageView frame].size.width, [imageView frame].size.height)];
    }

    NSRect sourceRect = [imageView convertViewRectToImageRect:targetRect];

    [targetImg lockFocus];
    [[NSGraphicsContext currentContext] setImageInterpolation:NSImageInterpolationHigh];
    [sourceImg drawAtPoint:NSZeroPoint fromRect:sourceRect operation:NSCompositeSourceOver fraction:1.0f];
    [targetImg unlockFocus];

    _saveOptions = [[IKSaveOptions alloc] initWithImageProperties:_imageProperties imageUTType:_imageUTType];
    NSString *newUTType = [_saveOptions imageUTType];

    CGImageRef targetImgRef = [targetImg CGImageForProposedRect:NULL context:[NSGraphicsContext currentContext] hints:nil];
    if (targetImgRef) {
        NSURL *url = [NSURL fileURLWithPath:path];
        CGImageDestinationRef dest = CGImageDestinationCreateWithURL((__bridge CFURLRef)url, (__bridge CFStringRef)newUTType, 1, NULL);
        if (dest) {
            CGImageDestinationAddImage(dest, targetImgRef, (__bridge CFDictionaryRef)[_saveOptions imageProperties]);
            CGImageDestinationFinalize(dest);
            CFRelease(dest);
        }
    }
}
I have no idea what I'm doing wrong.
Thank you,
Jens

Resize and Save NSImage?

I have an NSImageView which I get an image for from an NSOpenPanel. That works great.
Now, how can I take that NSImage, halve its size, and save it in the same format and the same directory as the original?
If you can help at all, I'd appreciate it. Thanks.
Check the ImageCrop sample project from Matt Gemmell:
http://mattgemmell.com/source/
It's a nice example of how to resize and crop images.
Finally, you can use something like this to save the result (quick-and-dirty sample):
// Write to TIF
[[resultImg TIFFRepresentation] writeToFile:@"/Users/Anne/Desktop/Result.tif" atomically:YES];

// Write to JPG
NSData *imageData = [resultImg TIFFRepresentation];
NSBitmapImageRep *imageRep = [NSBitmapImageRep imageRepWithData:imageData];
NSDictionary *imageProps = [NSDictionary dictionaryWithObject:[NSNumber numberWithFloat:0.9] forKey:NSImageCompressionFactor];
imageData = [imageRep representationUsingType:NSJPEGFileType properties:imageProps];
[imageData writeToFile:@"/Users/Anne/Desktop/Result.jpg" atomically:NO];
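The snippet above hard-codes desktop paths. To cover the "same format, same directory as the original" part of the question, one rough sketch follows; the -half suffix, the small format mapping, and the originalURL parameter are my own assumptions, not from the answer above:

// Save resultImg next to the original, reusing the original's extension.
// Only a handful of common formats are mapped here.
- (void)saveImage:(NSImage *)resultImg alongsideOriginalAtURL:(NSURL *)originalURL
{
    NSString *ext = [[originalURL pathExtension] lowercaseString];
    NSBitmapImageFileType type = NSTIFFFileType;
    if ([ext isEqualToString:@"jpg"] || [ext isEqualToString:@"jpeg"]) type = NSJPEGFileType;
    else if ([ext isEqualToString:@"png"])                             type = NSPNGFileType;
    else if ([ext isEqualToString:@"gif"])                             type = NSGIFFileType;

    NSBitmapImageRep *rep = [NSBitmapImageRep imageRepWithData:[resultImg TIFFRepresentation]];
    NSData *data = [rep representationUsingType:type properties:@{}];

    NSString *name = [[[originalURL lastPathComponent] stringByDeletingPathExtension]
                         stringByAppendingString:@"-half"];
    NSURL *outURL = [[[originalURL URLByDeletingLastPathComponent]
                         URLByAppendingPathComponent:name]
                         URLByAppendingPathExtension:ext];
    [data writeToURL:outURL atomically:YES];
}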
Since NSImage objects are immutable you will have to:

1. Create a Core Graphics context the size of the new image.
2. Draw the NSImage into the CGContext. It should automatically scale it for you.
3. Create an NSImage from that context.
4. Write out the new NSImage.

Don't forget to release any temporary objects you allocated.
There are definitely other options, but this is the first one that came to mind.
+ (NSImage *)resize:(NSImage *)aImage scale:(CGFloat)aScale
{
    NSImageView *kView = [[NSImageView alloc] initWithFrame:NSMakeRect(0, 0, aImage.size.width * aScale, aImage.size.height * aScale)];
    [kView setImageScaling:NSImageScaleProportionallyUpOrDown];
    [kView setImage:aImage];

    NSRect kRect = kView.frame;
    NSBitmapImageRep *kRep = [kView bitmapImageRepForCachingDisplayInRect:kRect];
    [kView cacheDisplayInRect:kRect toBitmapImageRep:kRep];

    NSData *kData = [kRep representationUsingType:NSJPEGFileType properties:nil];
    return [[NSImage alloc] initWithData:kData];
}
Here is a specific implementation:
- (NSImage *)resizeImage:(NSImage *)input by:(CGFloat)factor
{
    NSSize size = NSZeroSize;
    size.width  = input.size.width * factor;
    size.height = input.size.height * factor;

    NSImage *ret = [[NSImage alloc] initWithSize:size];
    [ret lockFocus];

    NSAffineTransform *transform = [NSAffineTransform transform];
    [transform scaleBy:factor];
    [transform concat];
    [input drawAtPoint:NSZeroPoint fromRect:NSZeroRect operation:NSCompositeCopy fraction:1.0];

    [ret unlockFocus];
    return [ret autorelease];
}
Keep in mind that this is pixel-based; with HiDPI displays the backing scale factor must be taken into account. It is simple to obtain (here as an NSView method):
- (CGFloat)pixelScaling
{
    NSRect pixelBounds = [self convertRectToBacking:self.bounds];
    return pixelBounds.size.width / self.bounds.size.width;
}
Apple has sample code for downscaling and saving images, found here:
http://developer.apple.com/library/mac/#samplecode/Reducer/Introduction/Intro.html
Here is some code that makes more extensive use of Core Graphics than the other answers. It follows the hints in Mark Thalman's answer to this question.
This code downscales an NSImage to a target width. It's somewhat rough, but still useful as an extra sample documenting how to draw an NSImage into a CGContext, and how to write the contents of a CGBitmapContext and a CGImage to a file.
You may want to add extra error checking; I didn't need it for my use case.
- (void)generateThumbnailForImage:(NSImage *)image atPath:(NSString *)newFilePath forWidth:(int)width
{
    CGSize size = CGSizeMake(width, image.size.height * (float)width / (float)image.size.width);

    CGColorSpaceRef rgbColorspace = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGImageAlphaPremultipliedLast;
    CGContextRef context = CGBitmapContextCreate(NULL, size.width, size.height, 8, size.width * 4, rgbColorspace, bitmapInfo);

    NSGraphicsContext *graphicsContext = [NSGraphicsContext graphicsContextWithGraphicsPort:context flipped:NO];
    [NSGraphicsContext setCurrentContext:graphicsContext];
    [image drawInRect:NSMakeRect(0, 0, size.width, size.height) fromRect:NSMakeRect(0, 0, image.size.width, image.size.height) operation:NSCompositeCopy fraction:1.0];

    CGImageRef outImage = CGBitmapContextCreateImage(context);
    CFURLRef outURL = (CFURLRef)[NSURL fileURLWithPath:newFilePath];
    CGImageDestinationRef outDestination = CGImageDestinationCreateWithURL(outURL, kUTTypeJPEG, 1, NULL);
    CGImageDestinationAddImage(outDestination, outImage, NULL);
    if (!CGImageDestinationFinalize(outDestination))
    {
        NSLog(@"Failed to write image to %@", newFilePath);
    }

    CFRelease(outDestination);
    CGImageRelease(outImage);
    CGContextRelease(context);
    CGColorSpaceRelease(rgbColorspace);
}
To resize an image:
- (NSImage *)scaleImage:(NSImage *)anImage newSize:(NSSize)newSize
{
    NSImage *sourceImage = anImage;
    if (![sourceImage isValid]) {
        return nil;
    }

    // Nothing to do if the size already matches or the requested size is degenerate.
    if ((anImage.size.width == newSize.width && anImage.size.height == newSize.height) ||
        newSize.width <= 0 || newSize.height <= 0) {
        return anImage;
    }

    NSRect oldRect = NSMakeRect(0.0, 0.0, anImage.size.width, anImage.size.height);
    NSRect newRect = NSMakeRect(0, 0, newSize.width, newSize.height);
    NSImage *newImage = [[NSImage alloc] initWithSize:newSize];

    [newImage lockFocus];
    [sourceImage drawInRect:newRect fromRect:oldRect operation:NSCompositeCopy fraction:1.0];
    [newImage unlockFocus];

    return newImage;
}
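For the original question (half size), a hypothetical call would be (originalImage is a placeholder for the NSImage loaded from the open panel):

NSImage *half = [self scaleImage:originalImage
                         newSize:NSMakeSize(originalImage.size.width / 2.0,
                                            originalImage.size.height / 2.0)];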