The iPad app I am creating has to create tiles for a 4096x2992 image that is generated earlier in the app.
The 4096x2992 image I'm testing with isn't very complex, and when written to file in PNG format it is approximately 600 KB.
On the simulator this code seems to work fine, but when I run the app in tighter memory conditions (on my iPad) the process quits because it ran out of memory.
I've used the same code in the app before and it worked fine (although it was only creating tiles for 3072x2244 images).
Either I must be doing something stupidly wrong or my @autoreleasepool blocks aren't working as they should (I should mention that I'm using ARC). When running in Instruments I can see the memory used climb up to ~500 MB, at which point it crashes!
I've run Analyze over the code and it hasn't found a single memory leak related to this part of the app, so I'm really confused about why this is crashing on me.
Just a little history on how my function gets called, so you know what's happening: the app uses Core Graphics to render a UIView (4096x2992) with some UIImageViews inside it, then it passes that UIImage into my buildFromImage: method (below), which begins cutting up/resizing the image to create my file.
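For context, a minimal sketch of that render step (not the app's actual code; containerView here stands in for the 4096x2992 view) might look like this:
// Hypothetical sketch of the render step described above (requires QuartzCore for -renderInContext:).
UIGraphicsBeginImageContextWithOptions(containerView.bounds.size, YES, 1.0);
[containerView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *rendered = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[self buildFromImage:rendered];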
Here is the buildFromImage: code. The memory issues build up within the main loop, after NSLog(@"LOG ------------> Begin tile loop ");
-(void)buildFromImage:(UIImage *)__image {
    NSLog(@"LOG ------------> Begin Build ");
    //if the __image is over 4096 wide or 2992 high then we must resize it! (stop crashes etc.)
    if (__image.size.width > __image.size.height) {
        if (__image.size.width > 4096) {
            __image = [self resizeImage:__image toSize:CGSizeMake(4096, (__image.size.height * 4096 / __image.size.width))];
        }
    } else {
        if (__image.size.height > 2992) {
            __image = [self resizeImage:__image toSize:CGSizeMake((__image.size.width * 2992 / __image.size.height), 2992)];
        }
    }
    //create preview image (if landscape, no more than 748 high... if portrait, no more than 1004 high) must keep scale
    NSString *temp_archive_store = [[NSString alloc] initWithFormat:@"%@/%i-temp_imgdat.zip",NSTemporaryDirectory(),arc4random()];
    NSString *temp_tile_store = [[NSString alloc] initWithFormat:@"%@/%i-temp_tilestore/",NSTemporaryDirectory(),arc4random()];
    //create the temp dir for the tile store
    [[NSFileManager defaultManager] createDirectoryAtPath:temp_tile_store withIntermediateDirectories:YES attributes:nil error:nil];
    //create each tile and add it to the compressor once it's made
    //size of tile
    CGSize tile_size = CGSizeMake(256, 256);
    //the scales that we will be generating the tiles for
    NSMutableArray *scales = [[NSMutableArray alloc] initWithObjects:[NSNumber numberWithInt:1000],[NSNumber numberWithInt:500],[NSNumber numberWithInt:250],[NSNumber numberWithInt:125], nil]; //scales to loop over
    NSLog(@"LOG ------------> Begin tile loop ");
    @autoreleasepool {
        //loop through the scales
        for (NSNumber *scale in scales) {
            //scale the image
            UIImage *imageForScale = [self resizedImage:__image scale:[scale intValue]];
            //calculate number of rows...
            float rows = ceil(imageForScale.size.height/tile_size.height);
            //calculate number of columns
            float cols = ceil(imageForScale.size.width/tile_size.width);
            //loop through rows and cols
            for (int row = 0; row < rows; row++) {
                for (int col = 0; col < cols; col++) {
                    NSLog(@"LOG ------> Creating Tile (%i,%i,%i)",col,row,[scale intValue]);
                    //build name for tile...
                    NSString *tile_name = [NSString stringWithFormat:@"%@_%i_%i_%i.png",@"image",[scale intValue],col,row];
                    @autoreleasepool {
                        //build tile for this coordinate
                        UIImage *tile = [self tileForRow:row column:col size:tile_size image:imageForScale];
                        //convert image to png data
                        NSData *tile_data = UIImagePNGRepresentation(tile);
                        [tile_data writeToFile:[NSString stringWithFormat:@"%@%@",temp_tile_store,tile_name] atomically:YES];
                    }
                }
            }
        }
    }
}
Here are my resizing/cropping functions too, as these could also be causing the issue:
-(UIImage *)resizeImage:(UIImage *)inImage toSize:(CGSize)scale {
    @autoreleasepool {
        CGImageRef inImageRef = [inImage CGImage];
        CGColorSpaceRef clrRf = CGColorSpaceCreateDeviceRGB();
        CGContextRef ctx = CGBitmapContextCreate(NULL, ceil(scale.width), ceil(scale.height), CGImageGetBitsPerComponent(inImageRef), CGImageGetBitsPerPixel(inImageRef)*ceil(scale.width), clrRf, kCGImageAlphaPremultipliedFirst );
        CGColorSpaceRelease(clrRf);
        CGContextDrawImage(ctx, CGRectMake(0, 0, scale.width, scale.height), inImageRef);
        CGImageRef img = CGBitmapContextCreateImage(ctx);
        UIImage *image = [[UIImage alloc] initWithCGImage:img scale:1 orientation:UIImageOrientationUp];
        CGImageRelease(img);
        CGContextRelease(ctx);
        return image;
    }
}
- (UIImage *)tileForRow: (int)row column: (int)col size: (CGSize)tileSize image: (UIImage*)inImage
{
    @autoreleasepool {
        //get the selected tile
        CGRect subRect = CGRectMake(col*tileSize.width, row * tileSize.height, tileSize.width, tileSize.height);
        CGImageRef inImageRef = [inImage CGImage];
        CGImageRef tiledImage = CGImageCreateWithImageInRect(inImageRef, subRect);
        UIImage *tileImage = [[UIImage alloc] initWithCGImage:tiledImage scale:1 orientation:UIImageOrientationUp];
        CGImageRelease(tiledImage);
        return tileImage;
    }
}
I never used to be that good with memory management, so I took the time to read up on it and also converted my project to ARC to see if that could address my issues (that was a while ago). But from the results I get after profiling in Instruments, I must be doing something STUPIDLY wrong for the memory to leak as badly as it does, and I just can't see what it is.
If anybody can point out anything I may be doing wrong it would be great!
Thanks
Liam
(let me know if you need more info)
I use "UIImage+Resize". Inside #autoreleasepool {} it works fine with ARC in a loop.
https://github.com/AliSoftware/UIImage-Resize
-(void)compress:(NSString *)fullPathToFile {
    @autoreleasepool {
        UIImage *fullImage = [[UIImage alloc] initWithContentsOfFile:fullPathToFile];
        UIImage *compressedImage = [fullImage resizedImageByHeight:1024];
        NSData *compressedData = UIImageJPEGRepresentation(compressedImage, 0.75); // quality is expected in the 0.0–1.0 range
        [compressedData writeToFile:fullPathToFile atomically:NO];
    }
}
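To illustrate how this is used, a driver loop like the following (the file paths are hypothetical) keeps the peak footprint at roughly one image, because each pass through compress: drains its own pool:
// Hypothetical driver loop over a set of image files on disk.
NSArray *imagePaths = @[@"/path/one.jpg", @"/path/two.jpg", @"/path/three.jpg"];
for (NSString *path in imagePaths) {
    [self compress:path];
}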
Before posting this question here, I read all the materials and similar posts on the topic, but I can't grasp the main idea of what is happening and how to fix it. In ten similar questions, everyone fixed this problem with @autoreleasepool, but in my case I was unable to achieve my goal that way. So while converting a cv::Mat to a UIImage I see memory growth that depends on the image size.
Below are the steps I perform before converting the Mat to a UIImage:
cv::Mat undistorted = cv::Mat(cvSize(maxWidth,maxHeight), CV_8UC1);
cv::Mat original = [MatStructure convertUIImageToMat:adjustedImage];
cv::warpPerspective(original, undistorted, cv::getPerspectiveTransform(src, dst), cvSize(maxWidth, maxHeight));
original.release();
adjustedImage = [MatStructure convertMatToUIImage:undistorted];
undistorted.release();
The problem is visible while converting the Mat to a UIImage: memory goes up to 400 MB, and it rises on every cycle.
+ (UIImage *) convertMatToUIImage: (cv::Mat) cvMat {
NSData *data = [NSData dataWithBytes:cvMat.data length:cvMat.elemSize() * cvMat.total()];
CGColorSpaceRef colorSpace;
if (cvMat.elemSize() == 1) {
colorSpace = CGColorSpaceCreateDeviceGray();
} else {
colorSpace = CGColorSpaceCreateDeviceRGB();
}
CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef) data);
CGBitmapInfo bmInfo = kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big;
CGImageRef imageRef = CGImageCreate(cvMat.cols, // width
cvMat.rows, // height
8, // bits per component
8 * cvMat.elemSize(), // bits per pixel
cvMat.step.p[0], // bytesPerRow
colorSpace, // colorspace
bmInfo, // bitmap info
provider, // CGDataProviderRef
NULL, // decode
false, // should interpolate
kCGRenderingIntentDefault // intent
);
UIImage *image = [[UIImage alloc] initWithCGImage:imageRef];
CGImageRelease(imageRef);
CGDataProviderRelease(provider);
CGColorSpaceRelease(colorSpace);
cvMat.release(); // this line is optional.
return image;
}
I have seen a lot of similar code, but every example works just like this one.
I believe the problem lies in (__bridge CFDataRef) and that ARC can't clean up this data. If I try CFRelease((__bridge CFDataRef)data), a crash happens, because the program will look for memory that has already been freed.
I am using OpenCV 3 and have tried its MatToUIImage method, but the problem still exists. The Leaks profiler shows no leaks at all, and the most expensive task in memory is convertMatToUIImage.
I have been reading about this all day but can't find a useful solution yet.
Currently I work in Swift 3.0, which subclasses class XXX, and it uses the Objective-C class to crop something and then return a UIImage as well. In deinit I set this inherited class property to nil, but the problem still exists. Also, I think that dataWithBytes duplicates the memory: if I have 16 MB at the start, after creating the NSData it will be 32 MB..
And please, if you can suggest useful threads about this problem, I will be glad to read them all. Thanks for the help.
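On the point above about dataWithBytes duplicating memory: if the extra copy itself is the concern, one commonly mentioned variant (just a sketch, and only safe if the Mat's buffer outlives the created image) is to wrap the pixels without copying:
// Sketch only: wrap the Mat's pixel buffer without copying it.
// Safe ONLY while cvMat (and its buffer) stays alive for the lifetime of the CGImage.
NSData *data = [NSData dataWithBytesNoCopy:cvMat.data
                                    length:cvMat.elemSize() * cvMat.total()
                              freeWhenDone:NO];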
After working on this problem for more than three days, I had to rewrite the function, and it worked 100%; I have tested it on five different devices.
CFRelease, free() and @autoreleasepool did not help me at all, so I implemented this:
data = UIImageJPEGRepresentation([[UIImage alloc] initWithCGImage:imageRef], 0.2f); // because images are 30MB and up
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsDirectory = [paths objectAtIndex:0];
NSString *appFile = [documentsDirectory stringByAppendingPathComponent:@"MyFile.jpeg"];
[data writeToFile:appFile atomically:NO];
data = nil;
After this solution everything worked fine. So I grab the UIImage and convert it to NSData, then save it to the local directory; the only thing left is to read the data back from that directory. I hope this thread will help someone one day.
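For completeness, the read-back step mentioned above might look roughly like this (using the appFile path from the snippet above):
// Sketch: read the saved JPEG back from the Documents directory when it is needed again.
NSData *savedData = [NSData dataWithContentsOfFile:appFile];
UIImage *reloadedImage = [UIImage imageWithData:savedData];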
My current method is:
CGDataProviderRef provider = CGImageGetDataProvider(imageRef);
imageData.rawData = CGDataProviderCopyData(provider);
imageData.imageData = (UInt8 *) CFDataGetBytePtr(imageData.rawData);
I only get about 30 frames per second. I know part of the performance hit is copying the data; it'd be nice if I could just have access to the stream of bytes and not have it automatically create a copy for me.
I'm trying to process CGImageRefs as fast as possible. Is there a faster way?
Here's a snippet of my working solution:
- (void)applicationDidFinishLaunching:(NSNotification *)aNotification {
// Insert code here to initialize your application
//timer = [NSTimer scheduledTimerWithTimeInterval:1.0/60.0 //2000.0
// target:self
// selector:@selector(timerLogic)
// userInfo:nil
// repeats:YES];
leagueGameState = [LeagueGameState new];
[self updateWindowList];
lastTime = CACurrentMediaTime();
// Create a capture session
mSession = [[AVCaptureSession alloc] init];
// Set the session preset as you wish
mSession.sessionPreset = AVCaptureSessionPresetMedium;
// If you're on a multi-display system and you want to capture a secondary display,
// you can call CGGetActiveDisplayList() to get the list of all active displays.
// For this example, we just specify the main display.
// To capture both a main and secondary display at the same time, use two active
// capture sessions, one for each display. On Mac OS X, AVCaptureMovieFileOutput
// only supports writing to a single video track.
CGDirectDisplayID displayId = kCGDirectMainDisplay;
// Create a ScreenInput with the display and add it to the session
AVCaptureScreenInput *input = [[AVCaptureScreenInput alloc] initWithDisplayID:displayId];
input.minFrameDuration = CMTimeMake(1, 60);
//if (!input) {
// [mSession release];
// mSession = nil;
// return;
//}
if ([mSession canAddInput:input]) {
NSLog(#"Added screen capture input");
[mSession addInput:input];
} else {
NSLog(#"Couldn't add screen capture input");
}
//**********************Add output here
//dispatch_queue_t _videoDataOutputQueue;
//_videoDataOutputQueue = dispatch_queue_create( "com.apple.sample.capturepipeline.video", DISPATCH_QUEUE_SERIAL );
//dispatch_set_target_queue( _videoDataOutputQueue, dispatch_get_global_queue( DISPATCH_QUEUE_PRIORITY_HIGH, 0 ) );
AVCaptureVideoDataOutput *videoOut = [[AVCaptureVideoDataOutput alloc] init];
videoOut.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
[videoOut setSampleBufferDelegate:self queue:dispatch_get_main_queue()];
// RosyWriter records videos and we prefer not to have any dropped frames in the video recording.
// By setting alwaysDiscardsLateVideoFrames to NO we ensure that minor fluctuations in system load or in our processing time for a given frame won't cause framedrops.
// We do however need to ensure that on average we can process frames in realtime.
// If we were doing preview only we would probably want to set alwaysDiscardsLateVideoFrames to YES.
videoOut.alwaysDiscardsLateVideoFrames = YES;
if ( [mSession canAddOutput:videoOut] ) {
NSLog(#"Added output video");
[mSession addOutput:videoOut];
} else {NSLog(#"Couldn't add output video");}
// Start running the session
[mSession startRunning];
NSLog(#"Set up session");
}
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
//NSLog(#"Captures output from sample buffer");
//CMFormatDescriptionRef formatDescription = CMSampleBufferGetFormatDescription( sampleBuffer );
/*
if ( self.outputVideoFormatDescription == nil ) {
// Don't render the first sample buffer.
// This gives us one frame interval (33ms at 30fps) for setupVideoPipelineWithInputFormatDescription: to complete.
// Ideally this would be done asynchronously to ensure frames don't back up on slower devices.
[self setupVideoPipelineWithInputFormatDescription:formatDescription];
}
else {*/
[self renderVideoSampleBuffer:sampleBuffer];
//}
}
- (void)renderVideoSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
//CVPixelBufferRef renderedPixelBuffer = NULL;
//CMTime timestamp = CMSampleBufferGetPresentationTimeStamp( sampleBuffer );
//[self calculateFramerateAtTimestamp:timestamp];
// We must not use the GPU while running in the background.
// setRenderingEnabled: takes the same lock so the caller can guarantee no GPU usage once the setter returns.
//@synchronized( _renderer )
//{
// if ( _renderingEnabled ) {
CVPixelBufferRef sourcePixelBuffer = CMSampleBufferGetImageBuffer( sampleBuffer );
const int kBytesPerPixel = 4;
CVPixelBufferLockBaseAddress( sourcePixelBuffer, 0 );
int bufferWidth = (int)CVPixelBufferGetWidth( sourcePixelBuffer );
int bufferHeight = (int)CVPixelBufferGetHeight( sourcePixelBuffer );
size_t bytesPerRow = CVPixelBufferGetBytesPerRow( sourcePixelBuffer );
uint8_t *baseAddress = CVPixelBufferGetBaseAddress( sourcePixelBuffer );
int count = 0;
for ( int row = 0; row < bufferHeight; row++ )
{
uint8_t *pixel = baseAddress + row * bytesPerRow;
for ( int column = 0; column < bufferWidth; column++ )
{
count ++;
pixel[1] = 0; // De-green (second pixel in BGRA is green)
pixel += kBytesPerPixel;
}
}
CVPixelBufferUnlockBaseAddress( sourcePixelBuffer, 0 );
//NSLog(#"Test Looped %d times", count);
CIImage *ciImage = [CIImage imageWithCVImageBuffer:sourcePixelBuffer];
/*
CIContext *temporaryContext = [CIContext contextWithCGContext:
[[NSGraphicsContext currentContext] graphicsPort]
options: nil];
CGImageRef videoImage = [temporaryContext
createCGImage:ciImage
fromRect:CGRectMake(0, 0,
CVPixelBufferGetWidth(sourcePixelBuffer),
CVPixelBufferGetHeight(sourcePixelBuffer))];
*/
//UIImage *uiImage = [UIImage imageWithCGImage:videoImage];
// Create a bitmap rep from the image...
NSBitmapImageRep *bitmapRep = [[NSBitmapImageRep alloc] initWithCIImage:ciImage];
// Create an NSImage and add the bitmap rep to it...
NSImage *image = [[NSImage alloc] init];
[image addRepresentation:bitmapRep];
// Set the output view to the new NSImage.
[imageView setImage:image];
//CGImageRelease(videoImage);
//renderedPixelBuffer = [_renderer copyRenderedPixelBuffer:sourcePixelBuffer];
// }
// else {
// return;
// }
//}
//Profile code? See how fast it's running?
if (CACurrentMediaTime() - lastTime > 3) // every 3 seconds
{
float time = CACurrentMediaTime() - lastTime;
[fpsText setStringValue:[NSString stringWithFormat:@"Elapsed Time: %f ms, %f fps", time * 1000 / loopsTaken, (1000.0)/(time * 1000.0 / loopsTaken)]];
lastTime = CACurrentMediaTime();
loopsTaken = 0;
[self updateWindowList];
if (leagueGameState.leaguePID == -1) {
[statusText setStringValue:@"No League Instance Found"];
}
}
else
{
loopsTaken++;
}
}
I get a very nice 60 frames per second even after looping through the data.
It captures the screen, I get the data, I modify the data and I re-show the data.
Which "stream of bytes" do you mean? CGImage represents the final bitmap data, but under the hood it may still be compressed. The bitmap may currently be stored on the GPU, so getting to it might require a GPU->CPU fetch (which is expensive, and should be avoided when you don't need it).
If you're trying to do this at greater than 30fps, you may want to rethink how you're attacking the problem, and use tools designed for that, like Core Image, Core Video, or Metal. Core Graphics is optimized for display, not processing (and definitely not real-time processing). A key difference in tools like Core Image is that you can perform more of your work on the GPU without shuffling data back to the CPU. This is absolutely critical for maintaining fast pipelines. Whenever possible, you want to avoid getting the actual bytes.
If you have a CGImage already, you can convert it to a CIImage with imageWithCGImage: and then use CIImage to process it further. If you really need access to the bytes, your options are the one you're using, or to render it into a bitmap context (which also will require copying) with CGContextDrawImage. There's just no promise that a CGImage has a bunch of bitmap bytes hanging around at any given time that you can look at, and it doesn't provide "lock your buffer" methods like you'll find in real-time frameworks like Core Video.
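As a rough sketch of that route (assuming you already hold a CGImageRef named cgImage), the conversion and a simple GPU-friendly operation look like this:
// Sketch: wrap an existing CGImage in a CIImage and apply a filter without pulling bytes to the CPU.
CIImage *ciInput = [CIImage imageWithCGImage:cgImage];
CIFilter *blur = [CIFilter filterWithName:@"CIGaussianBlur"];
[blur setValue:ciInput forKey:kCIInputImageKey];
[blur setValue:@2.0 forKey:kCIInputRadiusKey];
CIImage *ciOutput = blur.outputImage; // still lazy; only rendered when drawn through a CIContext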
Some very good introductions to high-speed image processing from WWDC videos:
WWDC 2013 Session 509 Core Image Effects and Techniques
WWDC 2014 Session 514 Advances in Core Image
WWDC 2014 Sessions 603-605 Working with Metal
I've done an exhaustive search on this and I know similar questions have been posted before about NSBitmapImageRep, but none of them seem specific to what I'm trying to do, which is simply:
Read in an image from the desktop (but NOT display it)
Create an NSBitmap representation of that image
Iterate through the pixels to change some colours
Save the modified bitmap representation as a separate file
Since I've never worked with bitmaps before I thought I'd just try to create and save one first, and worry about modifying pixels later. That seemed really straightforward, but I just can't get it to work. Apart from the file saving aspect, most of the code is borrowed from another answer found on StackOverflow and shown below:
-(void)processBitmapImage:(NSString*)aFilepath
{
NSImage *theImage = [[NSImage alloc] initWithContentsOfFile:aFilepath];
if (theImage)
{
CGImageRef CGImage = [theImage CGImageForProposedRect:nil context:nil hints:nil];
NSBitmapImageRep *imageRep = [[NSBitmapImageRep alloc] initWithCGImage:CGImage];
NSInteger width = [imageRep pixelsWide];
NSInteger height = [imageRep pixelsHigh];
long rowBytes = [imageRep bytesPerRow];
// above matches the original size indicating NSBitmapImageRep was created successfully
printf("WIDE pix = %ld\n", width);
printf("HIGH pix = %ld\n", height);
printf("Row bytes = %ld\n", rowBytes);
// We'll worry about this part later...
/*
unsigned char* pixels = [imageRep bitmapData];
int row, col;
for (row=0; row < height; row++)
{
// etc ...
for (col=0; col < width; col++)
{
// etc...
}
}
*/
// So, let's see if we can just SAVE the (unmodified) bitmap first ...
NSData *pngData = [imageRep representationUsingType: NSPNGFileType properties: nil];
NSString *destinationStr = [self pathForDataFile];
BOOL returnVal = [pngData writeToFile:destinationStr atomically: NO];
NSLog(#"did we succeed?:%#", (returnVal ? #"YES": #"NO")); // the writeToFile call FAILS!
[imageRep release];
}
[theImage release];
}
While I like this code for its simplicity, another potential issue down the road might be that the Apple docs advise us to treat bitmaps returned by initWithCGImage: as read-only objects…
Can anyone please tell me where I'm going wrong with this code, and how I could modify it to work? While the overall concept looks okay to my non-expert eye, I suspect I'm making a dumb mistake and overlooking something quite basic. Thanks in advance :-)
That's a fairly roundabout way to create the NSBitmapImageRep. Try creating it like this:
NSBitmapImageRep* imageRep = [NSBitmapImageRep imageRepWithContentsOfFile:aFilepath];
Of course, the above does not give you ownership of the image rep object, so don't release it at the end.
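Put together, a minimal sketch of the simpler flow (reusing aFilepath and pathForDataFile from the question's code) would be:
// Sketch: load the rep straight from disk, then write it back out as a PNG.
NSBitmapImageRep *imageRep = [NSBitmapImageRep imageRepWithContentsOfFile:aFilepath];
NSData *pngData = [imageRep representationUsingType:NSPNGFileType properties:@{}];
BOOL ok = [pngData writeToFile:[self pathForDataFile] atomically:NO];
NSLog(@"saved: %@", ok ? @"YES" : @"NO");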
I've been tasked with solving a memory leak in a custom Objective-C class for a legacy app that uses garbage collection.
The class takes in NSData from a jpeg file and can thumb the image. There is a method to return a new NSData object with the newly resized image.
ImageThumber * imgt = [ImageThumber withNSData:dataObjectFromJpeg];
[imgt thumbImage:1024];
NSData * smallImage = [imgt imageData];
[imgt thumbImage:256];
NSData * extraSmallImage = [imgt imageData];
It does what it's supposed to do, but it's been discovered that for every ImageThumber that's created, an ImageIO_jpeg_Data object is allocated and never deallocated. This was found in Instruments.
When created using ImageThumber withNSData:(NSData *), it creates a CGImage and stores it in a private CGImageRef variable.
I found that if thumbImage:(int) isn't called, the ImageIO_jpeg_Data will deallocate when the ImageThumber is deallocated, which leads me to believe the problem lies somewhere within the thumbImage method. If the thumbImage method is called multiple times, it doesn't create extra ImageIO_jpeg_Data objects.
I have little experience with Core Graphics and garbage collection.
+(id)SAMImageDataWithNSData:(NSData *)data
{
SAMImageData * new = [[[self alloc] init] autorelease];
new.imageData = [NSMutableData dataWithData:data];
CFDataRef imgData = (CFDataRef)data;
CGDataProviderRef imgDataProvider;
imgDataProvider = CGDataProviderCreateWithCFData(imgData);
new->_cgImageRef = CGImageCreateWithJPEGDataProvider(imgDataProvider, NULL, true, kCGRenderingIntentDefault);
CGDataProviderRelease(imgDataProvider);
int width = (int)CGImageGetWidth(new->_cgImageRef);
int height = (int)CGImageGetHeight(new->_cgImageRef);
new.originalSize = NSMakeSize(width, height);
return new;
}
-(void)thumbImage:(int)length
{
/* simple logic to calculate new width and height */
//If the next line is commented out the problem doesn't exist.
//You just don't get the image resize.
[self resizeCGImageToWidth:newSize.width andHeight:newSize.height];
CFMutableDataRef workingData = (CFMutableDataRef)[[NSMutableData alloc] initWithCapacity:0];
CGImageDestinationRef dest;
dest = CGImageDestinationCreateWithData(workingData,kUTTypeJPEG,1,NULL);
CGImageDestinationAddImage(dest,_cgImageRef,NULL);
CGImageDestinationFinalize(dest);
CFRelease(dest);
self.imageData = (NSMutableData *)workingData;
}
This is where I believe the problem exists:
- (void)resizeCGImageToWidth:(int)width andHeight:(int)height {
CGColorSpaceRef colorspace = CGImageGetColorSpace(_cgImageRef);
CGContextRef context = CGBitmapContextCreate(NULL, width, height,
CGImageGetBitsPerComponent(_cgImageRef),
CGImageGetBytesPerRow(_cgImageRef),
colorspace,
kCGImageAlphaPremultipliedFirst);
CGColorSpaceRelease(colorspace);
if(context == NULL)
return;
CGContextDrawImage(context, CGRectMake(0, 0, width, height), _cgImageRef);
_cgImageRef = CGBitmapContextCreateImage(context);
CGContextRelease(context);
}
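For what it's worth, one thing that stands out in the snippet above is that _cgImageRef is overwritten without the previous image ever being released; CF/CG objects are not collected by Objective-C garbage collection unless they are explicitly made collectable, so a hedged sketch of that last step (assuming this class owns _cgImageRef) might look like:
// Sketch: release the previous image before storing the new one,
// so the old bitmap's backing data can actually be freed.
CGImageRef resized = CGBitmapContextCreateImage(context);
if (_cgImageRef) CGImageRelease(_cgImageRef);
_cgImageRef = resized;
CGContextRelease(context);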
Basically I am building a screen capture app that should produce an optimally sized image.
I am trying to capture my Mac screen and save it as a JPEG file.
I wanted to capture the screen using the Cocoa API [CGWindowListCreateImage] and save the file as a JPEG using libjpeg9. I included the static library [libjpeg.a] and the header files [jpeglib.h, jconfig.h, jerror.h and jmorecfg.h] using the "Add Files to Project_Name" option in Xcode.
The code I used to save as JPEG file is
int write_jpeg_file( char *filename )
{
struct jpeg_compress_struct cinfo;
struct jpeg_error_mgr jerr;
/* this is a pointer to one row of image data */
JSAMPROW row_pointer[1];
FILE *outfile = fopen( filename, "wb" );
if ( !outfile )
{
printf("Error opening output jpeg file %s\n!", filename );
return -1;
}
cinfo.err = jpeg_std_error( &jerr );
jpeg_create_compress(&cinfo);
jpeg_stdio_dest(&cinfo, outfile);
/* Setting the parameters of the output file here */
cinfo.image_width = width;
cinfo.image_height = height;
cinfo.input_components = bytes_per_pixel;
cinfo.in_color_space = color_space;
/* default compression parameters, we shouldn't be worried about these */
jpeg_set_defaults( &cinfo );
/* Now do the compression .. */
jpeg_start_compress( &cinfo, TRUE );
/* like reading a file, this time write one row at a time */
while( cinfo.next_scanline < cinfo.image_height )
{
row_pointer[0] = &raw_image[ cinfo.next_scanline * cinfo.image_width * cinfo.input_components];
jpeg_write_scanlines( &cinfo, row_pointer, 1 );
}
/* similar to read file, clean up after we're done compressing */
jpeg_finish_compress( &cinfo );
jpeg_destroy_compress( &cinfo );
fclose( outfile );
/* success code is 1! */
return 1;
}
When I build and run the project I get the following error:
Parse Issue
Expected }
in the line
typedef enum { FALSE = 0, TRUE = 1 } boolean;
in jmorecfg.h, which is a libjpeg header file. I am not sure why I am getting this error. Please help me.
I know there is a way to save the file using the Cocoa API instead of libjpeg.
I first tried that in Xcode using Cocoa and I am able to get the JPEG file, but the file size is relatively large (~200 KB). When I create the image in C using libjpeg the file size is ~100 KB.
The code I used:
CGFloat imageCompression = 0.6;
- (NSImage *)captureImageForRect:(NSRect)rect {
CGImageRef screenShot = CGWindowListCreateImage(CGRectInfinite, kCGWindowListOptionOnScreenOnly, kCGNullWindowID, kCGWindowImageDefault);
NSBitmapImageRep *imageRep = [[NSBitmapImageRep alloc] initWithCGImage:screenShot];
NSLog(#"Bits per pixel : %ld", [imageRep bitsPerPixel]);
NSImage *result = [[NSImage alloc] init];
[result addRepresentation:imageRep];
screenShot=nil;
imageRep=nil;
return result;
}
-(NSData *)jpegRepresentationOfImage:(NSImage *) image {
NSBitmapImageRep* myBitmapImageRep=nil;
NSSize imageSize = [image size];
[image lockFocus];
NSRect imageRect = NSMakeRect(0, 0, imageSize.width, imageSize.height);
myBitmapImageRep = [[NSBitmapImageRep alloc] initWithFocusedViewRect:imageRect];
[image unlockFocus];
// set up the options for creating a JPEG
NSDictionary* options = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithDouble:imageCompression], NSImageCompressionFactor,
[NSNumber numberWithBool:NO], NSImageProgressive,
nil];
NSData* imageData = [myBitmapImageRep representationUsingType:NSJPEGFileType properties:options];
myBitmapImageRep=nil;
return ( imageData);
}
-(void)captureAndSave {
NSString *fileName = #"/Volumes/Official/images/output/Mac_Window_1.jpg";
NSImage *screenshot = [self captureImageForRect:[[NSScreen mainScreen] frame]];
NSData *jpgRep = [self jpegRepresentationOfImage:screenshot];
[jpgRep writeToFile:[fileName stringByExpandingTildeInPath] atomically:NO];
[NSApp terminate:self];
}
Adding the following line to jconfig.h solved the problem. After adding it, I no longer get the compile error and I am able to use libjpeg inside my Objective-C program.
#define USE_MAC_MEMMGR
In jmemmac.c, I found the following code, which helped me figure it out. Just adding the line above worked!
#ifndef USE_MAC_MEMMGR /* make sure user got configuration right */
You forgot to define USE_MAC_MEMMGR in jconfig.h. /* deliberate syntax error */
#endif