How to create an ICNS icon programmatically? - objective-c

OK, this is what I want:
Take some NSImages
Add them to an ICNS file
Save it
This is what I've done so far (purely as a test):
- (CGImageRef)refFromImage:(NSImage*)img
{
    CGImageSourceRef source;
    source = CGImageSourceCreateWithData((CFDataRef)[img TIFFRepresentation], NULL);
    CGImageRef maskRef = CGImageSourceCreateImageAtIndex(source, 0, NULL);
    return maskRef;
}
- (void)awakeFromNib
{
    NSImage* img1 = [NSImage imageNamed:@"image1"];
    NSImage* img2 = [NSImage imageNamed:@"image2"];
    NSLog(@"%@", img1);
    CGImageRef i1 = [self refFromImage:img1];
    CGImageRef i2 = [self refFromImage:img2];
    NSURL *fileURL = [NSURL fileURLWithPath:[@"~/Documents/final.icns" stringByExpandingTildeInPath]];
    CGImageDestinationRef dr = CGImageDestinationCreateWithURL((__bridge CFURLRef)fileURL, kUTTypeAppleICNS, 1, NULL);
    CGImageDestinationAddImage(dr, i1, NULL);
    CGImageDestinationAddImage(dr, i2, NULL);
    /* Even tried adding 'multiple' times
    CGImageDestinationAddImage(dr, i1, NULL);
    CGImageDestinationAddImage(dr, i2, NULL);
    CGImageDestinationAddImage(dr, i1, NULL);
    CGImageDestinationAddImage(dr, i2, NULL);
    */
    CGImageDestinationFinalize(dr);
    CFRelease(dr);
}
But it still keeps throwing this error:
ImageIO: CGImageDestinationFinalize image destination does
not have enough images
What's wrong with my code?
I've had a look at the answers below, but still nothing:
Save CGImageRef to PNG file errors? (ARC Caused?)
How exactly to make a CGImageRef from an image on disk

You can use IconFamily.
IconFamily is a Cocoa/Objective-C wrapper for the Mac OS X Carbon
API's "icon family" data type. Its main purpose is to enable Cocoa
applications to easily create custom file icons from NSImage
instances, and thus take advantage of Mac OS X's high-resolution RGBA
"thumbnail" icon formats to provide richly detailed thumbnail previews
of the files' contents.
NSImage *mImage = [[NSImage alloc] initWithContentsOfFile:@"/Users/Username/Desktop/WhiteTiger.jpg"];
IconFamily *fam = [IconFamily iconFamilyWithThumbnailsOfImage:mImage];
[fam writeToFile:@"/Users/Username/Desktop/WhiteTiger.icns"];
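If you would rather stay with ImageIO than pull in IconFamily, the CGImageDestination approach from the question can also work. Here is a minimal sketch of it (my own illustration, not from the original answer): it assumes every CGImageRef is non-NULL and already rendered at a standard icon size (16, 32, 128, 256 or 512 pixels square, which I believe ImageIO requires for ICNS), and it makes the count passed at creation match the number of images actually added, which is what CGImageDestinationFinalize checks.
#import <CoreServices/CoreServices.h>
#import <ImageIO/ImageIO.h>

// Hypothetical helper: writes an array of CGImageRefs (bridged into an NSArray) to one .icns file.
static BOOL WriteICNS(NSURL *url, NSArray *cgImages)
{
    CGImageDestinationRef dest = CGImageDestinationCreateWithURL((__bridge CFURLRef)url,
                                                                 kUTTypeAppleICNS,
                                                                 cgImages.count,   // must equal the number of AddImage calls below
                                                                 NULL);
    if (!dest) return NO;
    for (id obj in cgImages) {
        CGImageRef image = (__bridge CGImageRef)obj;
        if (image) CGImageDestinationAddImage(dest, image, NULL);
    }
    BOOL ok = CGImageDestinationFinalize(dest);
    CFRelease(dest);
    return ok;
}
If any of your CGImageRefs come back NULL (for example because imageNamed: did not find the resource), nothing gets added and Finalize reports exactly the "does not have enough images" error from the question.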

Related

How to allow forms (PDF) to be re-editable after they have been saved (Objective-C)

I'm new to the Core Graphics framework.
We have used the ILPDFKit library to render the PDF form.
We embedded the drawn paths into the existing PDF. Here is the code:
-(NSData *)embededPdfAnnotationPointsInPdfAtPath:(NSString *)pdfPath
{
    NSURL *pdfUrl = [NSURL fileURLWithPath:pdfPath];
    CGPDFDocumentRef pdf = CGPDFDocumentCreateWithURL((CFURLRef)pdfUrl);
    NSMutableData *data = [NSMutableData new];
    UIGraphicsBeginPDFContextToData(data, CGRectZero, nil);
    for (NSUInteger pageIndex = 1; pageIndex <= [self pdfDrawViewInfo].count; pageIndex++)
    {
        // Get the current page and page frame
        CGPDFPageRef pdfPage = CGPDFDocumentGetPage(pdf, pageIndex);
        const CGRect pageFrame = CGPDFPageGetBoxRect(pdfPage, kCGPDFMediaBox);
        UIGraphicsBeginPDFPageWithInfo(pageFrame, nil);
        // Draw the page (flipped)
        CGContextRef ctx = UIGraphicsGetCurrentContext();
        CGContextSaveGState(ctx);
        CGContextScaleCTM(ctx, 1, -1);
        CGContextTranslateCTM(ctx, 0, -pageFrame.size.height);
        CGContextDrawPDFPage(ctx, pdfPage);
        CGContextRestoreGState(ctx);
        UIImage *drawViewImage = [self annotatedImageForPdfPageAtIndex:(pageIndex - 1)];
        UIImage *annotatedImgForPdfPage = [self imageWithImage:drawViewImage scaledToSize:pageFrame.size];
        [annotatedImgForPdfPage drawInRect:pageFrame];
    }
    UIGraphicsEndPDFContext();
    // CGPDFDocumentRelease(pdf);
    // CGImageRelease(annotatedImgCGref);
    return data;
}
The code above shows that we are just pasting an image (the drawn paths) onto the existing PDF.
Later the requirement changed to: after saving a PDF with annotations (freehand drawing), we need to keep control over the annotated content.
Question:
Edit a PDF with annotations (freehand drawing) and save it. After saving the PDF with the annotations, we need to regain the ability to edit them. We need to know the saving mechanism and the steps of this process. Please share your views here; it would help me a lot.
Thanks in advance
I would probably keep the original version of the PDF and store the edits in another file.
Then each time the user saves, the app applies the annotations and the other edits to the original version and generates the final PDF (for sharing or export features).
That gives you complete control (I guess this is also the way Apple's Photos, iPhoto and Aperture manage picture modifications...).
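A minimal sketch of that idea (my own illustration, not part of ILPDFKit): it assumes your drawing view can hand you its strokes as arrays of UIBezierPath objects, here a hypothetical pathsByPage array with one entry per page. Because UIBezierPath conforms to NSCoding, the strokes can be archived to a sidecar file next to the untouched original PDF and only flattened into a copy at export time, which keeps them editable.
#import <UIKit/UIKit.h>

// Hypothetical sidecar store: pathsByPage is an NSArray of NSArray<UIBezierPath *>, one per page.
- (void)saveAnnotationPaths:(NSArray *)pathsByPage toFile:(NSString *)sidecarPath
{
    NSData *archive = [NSKeyedArchiver archivedDataWithRootObject:pathsByPage];
    [archive writeToFile:sidecarPath atomically:YES];
}

- (NSArray *)loadAnnotationPathsFromFile:(NSString *)sidecarPath
{
    NSData *archive = [NSData dataWithContentsOfFile:sidecarPath];
    return archive ? [NSKeyedUnarchiver unarchiveObjectWithData:archive] : nil;
}
On export you would then run something like the embededPdfAnnotationPointsInPdfAtPath: method above against the pristine original, while the sidecar file stays the editable source of truth.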

How to convert hex data to UIImage?

I'd like to display a .cpbitmap file (the format iOS uses to save wallpapers) in a UIImageView. The problem is that I need to convert it. I have already figured out that if you take its data (NSData) and walk through it byte by byte you get the pixel colors, so the first byte is R, then B, then G, and then alpha (I think). Now I need to "draw" a UIImage out of that info. Does anyone know how to do this?
Here is the link to the .cpbitmap file: https://www.dropbox.com/s/s9v4lahixm9cuql/LockBackground.cpbitmap
It would be really cool if someone could help me.
Thanks
EDIT
I found a working Python script; is someone able to translate it to Objective-C?
#!/usr/bin/python
from PIL import Image, ImageOps
import struct
import sys

if len(sys.argv) < 3:
    print "Need two args: filename and result_filename\n"
    sys.exit(0)

filename = sys.argv[1]
result_filename = sys.argv[2]

with open(filename) as f:
    contents = f.read()

unk1, width, height, unk2, unk3, unk4 = struct.unpack('<6i', contents[-24:])
im = Image.fromstring('RGBA', (width, height), contents, 'raw', 'RGBA', 0, 1)
r, g, b, a = im.split()
im = Image.merge('RGBA', (b, g, r, a))
im.save(result_filename)
The basic process of converting RGBA data into an image is to create a CGDataProviderRef with the raw data, and then use CGImageCreate to create a CGImageRef, from which you can easily generate a UIImage. So, that gives you something like:
- (UIImage *)imageForBitmapData:(NSData *)data size:(CGSize)size
{
    void *bitmapData;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    int bitmapBytesPerRow = (size.width * 4);
    int bitmapByteCount = (bitmapBytesPerRow * size.height);
    bitmapData = malloc(bitmapByteCount);
    NSAssert(bitmapData, @"Unable to create buffer");
    [data getBytes:bitmapData length:bitmapByteCount];
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, bitmapData, bitmapByteCount, releasePixels);
    CGImageRef imageRef = CGImageCreate(size.width,
                                        size.height,
                                        8,
                                        32,
                                        bitmapBytesPerRow,
                                        colorSpace,
                                        (CGBitmapInfo)kCGImageAlphaLast,
                                        provider,
                                        NULL,
                                        NO,
                                        kCGRenderingIntentDefault);
    UIImage *image = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGColorSpaceRelease(colorSpace);
    CGDataProviderRelease(provider);
    return image;
}
With a releasePixels function defined as follows:
void releasePixels(void *info, const void *data, size_t size)
{
    free((void *)data);
}
The only trick was identifying the dimensions of the bitmap. There are 4,142,592 bytes of image data (there is some extra stuff at the end of the file, which is self evident if you examine the file in hexadecimal in Xcode). That doesn't correlate to any standard device dimensions. But if you look at the possible values that divide evenly into 4,142,592, you get a couple of promising ones (496, 522, 558, 576, 696, 744, 899, 928, and 992). And if you just try those out, it becomes obvious that the image is 744 x 1392.
You can then use those dimensions with the above method, and you get your image.
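If you want to reproduce that empirical search, here is a minimal sketch (my own illustration, not part of the original answer): it assumes 4 bytes per pixel and simply logs width/height pairs that divide the byte count evenly and have a screen-like portrait aspect ratio.
// Hypothetical helper: list plausible dimensions for a raw 32-bit-per-pixel buffer.
void LogCandidateDimensions(NSUInteger byteCount)
{
    NSUInteger pixelCount = byteCount / 4;                     // 4 bytes (RGBA) per pixel
    for (NSUInteger width = 300; width <= 2048; width++) {
        if (pixelCount % width != 0) continue;
        NSUInteger height = pixelCount / width;
        if (height >= width && height <= 3 * width) {          // portrait, screen-like ratio
            NSLog(@"candidate: %lu x %lu", (unsigned long)width, (unsigned long)height);
        }
    }
}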
While I discovered the size of the image empirically, I noticed that those dimensions were encoded at the end of the file. This is confirmed by your Python code, which suggests that the image width is the fifth from the last UInt32, and the height is the fourth from the last UInt32. Thus, you can use the above routine like so, extracting the dimensions from those two 32-bit integers encoded near the end of the file:
NSString *path = [[NSBundle mainBundle] pathForResource:@"LockBackground" ofType:@"cpbitmap"];
NSData *data = [NSData dataWithContentsOfFile:path];
NSAssert(data, @"no data found");
UInt32 width;
UInt32 height;
[data getBytes:&width range:NSMakeRange([data length] - sizeof(UInt32) * 5, sizeof(UInt32))];
[data getBytes:&height range:NSMakeRange([data length] - sizeof(UInt32) * 4, sizeof(UInt32))];
self.imageView.image = [self imageForBitmapData:data size:CGSizeMake(width, height)];

Using libjpeg from Objective-C in Xcode

Basically I am making a screen capture app with optimal image size.
I am trying to capture my Mac screen and save it as a JPEG file.
I wanted to capture the screen using the Cocoa API [CGWindowListCreateImage] and save the file as a JPEG using libjpeg9. I included the static library [libjpeg.a] and the header files [jpeglib.h, jconfig.h, jerror.h and jmorecfg.h] using the "Add Files to Project_Name" option in Xcode.
The code I used to save the JPEG file is:
int write_jpeg_file( char *filename )
{
    struct jpeg_compress_struct cinfo;
    struct jpeg_error_mgr jerr;
    /* this is a pointer to one row of image data */
    JSAMPROW row_pointer[1];
    FILE *outfile = fopen( filename, "wb" );
    if ( !outfile )
    {
        printf("Error opening output jpeg file %s\n!", filename );
        return -1;
    }
    cinfo.err = jpeg_std_error( &jerr );
    jpeg_create_compress(&cinfo);
    jpeg_stdio_dest(&cinfo, outfile);
    /* Setting the parameters of the output file here */
    cinfo.image_width = width;
    cinfo.image_height = height;
    cinfo.input_components = bytes_per_pixel;
    cinfo.in_color_space = color_space;
    /* default compression parameters, we shouldn't be worried about these */
    jpeg_set_defaults( &cinfo );
    /* Now do the compression .. */
    jpeg_start_compress( &cinfo, TRUE );
    /* like reading a file, this time write one row at a time */
    while( cinfo.next_scanline < cinfo.image_height )
    {
        row_pointer[0] = &raw_image[ cinfo.next_scanline * cinfo.image_width * cinfo.input_components];
        jpeg_write_scanlines( &cinfo, row_pointer, 1 );
    }
    /* similar to read file, clean up after we're done compressing */
    jpeg_finish_compress( &cinfo );
    jpeg_destroy_compress( &cinfo );
    fclose( outfile );
    /* success code is 1! */
    return 1;
}
When I build and run the project I get the following error:
Parse Issue
Expected }
in the line
typedef enum { FALSE = 0, TRUE = 1 } boolean;
in jmorecfg.h, which is a libjpeg header file. I am not sure why I am getting this error. Please help me.
I know there is a way to save the file using the Cocoa API instead of using libjpeg.
I first tried it in Xcode using Cocoa. I am able to get the JPEG file, but the file size is relatively big (~200KB). When I create the image in C using libjpeg, the file size is ~100KB.
The code I used:
CGFloat imageCompression = 0.6;

- (NSImage *)captureImageForRect:(NSRect)rect {
    CGImageRef screenShot = CGWindowListCreateImage(CGRectInfinite, kCGWindowListOptionOnScreenOnly, kCGNullWindowID, kCGWindowImageDefault);
    NSBitmapImageRep *imageRep = [[NSBitmapImageRep alloc] initWithCGImage:screenShot];
    NSLog(@"Bits per pixel : %ld", [imageRep bitsPerPixel]);
    NSImage *result = [[NSImage alloc] init];
    [result addRepresentation:imageRep];
    screenShot = nil;
    imageRep = nil;
    return result;
}

- (NSData *)jpegReresentationOfImage:(NSImage *)image {
    NSBitmapImageRep *myBitmapImageRep = nil;
    NSSize imageSize = [image size];
    [image lockFocus];
    NSRect imageRect = NSMakeRect(0, 0, imageSize.width, imageSize.height);
    myBitmapImageRep = [[NSBitmapImageRep alloc] initWithFocusedViewRect:imageRect];
    [image unlockFocus];
    // set up the options for creating a JPEG
    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithDouble:imageCompression], NSImageCompressionFactor,
                             [NSNumber numberWithBool:NO], NSImageProgressive,
                             nil];
    NSData *imageData = [myBitmapImageRep representationUsingType:NSJPEGFileType properties:options];
    myBitmapImageRep = nil;
    return imageData;
}

- (void)captureAndSave {
    NSString *fileName = @"/Volumes/Official/images/output/Mac_Window_1.jpg";
    NSImage *screenshot = [self captureImageForRect:[[NSScreen mainScreen] frame]];
    NSData *jpgRep = [self jpegReresentationOfImage:screenshot];
    [jpgRep writeToFile:[fileName stringByExpandingTildeInPath] atomically:NO];
    [NSApp terminate:self];
}
Adding the following line to jconfig.h solved the problem. After adding it, I no longer get the compile error and I am able to use libjpeg inside my Objective-C program.
#define USE_MAC_MEMMGR
In jmemmac.c I found the following code, which helped me fix it. Just adding the above line worked!
#ifndef USE_MAC_MEMMGR /* make sure user got configuration right */
You forgot to define USE_MAC_MEMMGR in jconfig.h. /* deliberate syntax error */
#endif

Large Image Processing (ARC) Major memory leaks

The iPad app I am creating has to be able to create the tiles for a 4096x2992 image that is generated earlier in the app.
The 4096x2992 image I am testing with isn't very complex, and when written to file in PNG format it is approximately 600 KB.
On the simulator this code seems to work fine, but when I run the app under tighter memory conditions (on my iPad) the process quits because it ran out of memory.
I've previously used the same code in the app and it was working fine (it was only creating tiles for 3072x2244 images, however).
Either I must be doing something stupidly wrong or my @autoreleasepool blocks aren't working as they should (I think I mentioned that I'm using ARC). When running in Instruments I can just watch the memory used climb up to ~500 MB, where it then crashes!
I've analysed the code and it hasn't found a single memory leak related to this part of my app, so I'm really confused about why this is crashing on me.
Just a little history on how my function gets called so you know what's happening: the app uses Core Graphics to render a UIView (4096x2992) with some UIImageViews inside it, then it passes that UIImage into my buildFromImage: method (below), where it begins cutting up/resizing the image to create my file.
Here is the buildFromImage: code; the memory builds up within the main loop under NSLog(@"LOG ------------> Begin tile loop ");
-(void)buildFromImage:(UIImage *)__image {
    NSLog(@"LOG ------------> Begin Build ");
    // if the __image is over 4096 wide or 2992 high then we must resize it (stop crashes etc.)
    if (__image.size.width > __image.size.height) {
        if (__image.size.width > 4096) {
            __image = [self resizeImage:__image toSize:CGSizeMake(4096, (__image.size.height * 4096 / __image.size.width))];
        }
    } else {
        if (__image.size.height > 2992) {
            __image = [self resizeImage:__image toSize:CGSizeMake((__image.size.width * 2992 / __image.size.height), 2992)];
        }
    }
    // create preview image (if landscape, no more than 748 high... if portrait, no more than 1004 high) must keep scale
    NSString *temp_archive_store = [[NSString alloc] initWithFormat:@"%@/%i-temp_imgdat.zip", NSTemporaryDirectory(), arc4random()];
    NSString *temp_tile_store = [[NSString alloc] initWithFormat:@"%@/%i-temp_tilestore/", NSTemporaryDirectory(), arc4random()];
    // create the temp dir for the tile store
    [[NSFileManager defaultManager] createDirectoryAtPath:temp_tile_store withIntermediateDirectories:YES attributes:nil error:nil];
    // create each tile and add it to the compressor once its made
    // size of tile
    CGSize tile_size = CGSizeMake(256, 256);
    // the scales that we will be generating the tiles to
    NSMutableArray *scales = [[NSMutableArray alloc] initWithObjects:[NSNumber numberWithInt:1000], [NSNumber numberWithInt:500], [NSNumber numberWithInt:250], [NSNumber numberWithInt:125], nil]; // scales to loop round over
    NSLog(@"LOG ------------> Begin tile loop ");
    @autoreleasepool {
        // loop through the scales
        for (NSNumber *scale in scales) {
            // scale the image
            UIImage *imageForScale = [self resizedImage:__image scale:[scale intValue]];
            // calculate number of rows...
            float rows = ceil(imageForScale.size.height/tile_size.height);
            // calculate number of columns
            float cols = ceil(imageForScale.size.width/tile_size.width);
            // loop through rows and cols
            for (int row = 0; row < rows; row++) {
                for (int col = 0; col < cols; col++) {
                    NSLog(@"LOG ------> Creating Tile (%i,%i,%i)", col, row, [scale intValue]);
                    // build name for tile...
                    NSString *tile_name = [NSString stringWithFormat:@"%@_%i_%i_%i.png", @"image", [scale intValue], col, row];
                    @autoreleasepool {
                        // build tile for this coordinate
                        UIImage *tile = [self tileForRow:row column:col size:tile_size image:imageForScale];
                        // convert image to png data
                        NSData *tile_data = UIImagePNGRepresentation(tile);
                        [tile_data writeToFile:[NSString stringWithFormat:@"%@%@", temp_tile_store, tile_name] atomically:YES];
                    }
                }
            }
        }
    }
}
Here are my resizing/cropping functions too, as these could also be causing the issue:
-(UIImage *)resizeImage:(UIImage *)inImage toSize:(CGSize)scale {
    @autoreleasepool {
        CGImageRef inImageRef = [inImage CGImage];
        CGColorSpaceRef clrRf = CGColorSpaceCreateDeviceRGB();
        CGContextRef ctx = CGBitmapContextCreate(NULL, ceil(scale.width), ceil(scale.height), CGImageGetBitsPerComponent(inImageRef), CGImageGetBitsPerPixel(inImageRef)*ceil(scale.width), clrRf, kCGImageAlphaPremultipliedFirst);
        CGColorSpaceRelease(clrRf);
        CGContextDrawImage(ctx, CGRectMake(0, 0, scale.width, scale.height), inImageRef);
        CGImageRef img = CGBitmapContextCreateImage(ctx);
        UIImage *image = [[UIImage alloc] initWithCGImage:img scale:1 orientation:UIImageOrientationUp];
        CGImageRelease(img);
        CGContextRelease(ctx);
        return image;
    }
}

- (UIImage *)tileForRow:(int)row column:(int)col size:(CGSize)tileSize image:(UIImage *)inImage
{
    @autoreleasepool {
        // get the selected tile
        CGRect subRect = CGRectMake(col*tileSize.width, row * tileSize.height, tileSize.width, tileSize.height);
        CGImageRef inImageRef = [inImage CGImage];
        CGImageRef tiledImage = CGImageCreateWithImageInRect(inImageRef, subRect);
        UIImage *tileImage = [[UIImage alloc] initWithCGImage:tiledImage scale:1 orientation:UIImageOrientationUp];
        CGImageRelease(tiledImage);
        return tileImage;
    }
}
I never used to be that good with memory management, so I took the time to read up on it and also converted my project to ARC to see if that would address my issues (that was a while ago), but from the results I get after profiling it in Instruments I must be doing something STUPIDLY wrong for the memory to leak as badly as it does, and I just can't see what I'm doing wrong.
If anybody can point out anything I may be doing wrong it would be great!
Thanks
Liam
(let me know if you need more info)
I use "UIImage+Resize". Inside #autoreleasepool {} it works fine with ARC in a loop.
https://github.com/AliSoftware/UIImage-Resize
-(void)compress:(NSString *)fullPathToFile {
    @autoreleasepool {
        UIImage *fullImage = [[UIImage alloc] initWithContentsOfFile:fullPathToFile];
        UIImage *compressedImage = [fullImage resizedImageByHeight:1024];
        NSData *compressedData = UIImageJPEGRepresentation(compressedImage, 0.75); // quality is expressed as 0.0-1.0
        [compressedData writeToFile:fullPathToFile atomically:NO];
    }
}

NSImage acting weird

Why is this code setting artistImage to an image with 0 width and 0 height?
NSURL *artistImageURL = [NSURL URLWithString:@"http://userserve-ak.last.fm/serve/252/8581581.jpg"];
NSImage *artistImage = [[NSImage alloc] initWithContentsOfURL:artistImageURL];
As Ken wrote, the DPI is messed up in this image. If you want to force NSImage to set the real image size (ignoring the DPI), use the method described at http://borkware.com/quickies/one?topic=NSImage:
NSBitmapImageRep *rep = [[image representations] objectAtIndex: 0];
NSSize size = NSMakeSize([rep pixelsWide], [rep pixelsHigh]);
[image setSize: size];
NSImage does load this fine for me, but that particular image has corrupt metadata. Its resolution according to the exif data is 7.1999997999228071e-06 dpi.
NSImage respects the DPI info in the file, so if you try to draw the image at its natural size, you'll get something 2520000070 pixels across.
Last I checked, NSImage's -initWithContentsOfURL: only works with file URLs. You'll need to retrieve the data from the URL first, and then use -initWithData:.
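A minimal sketch of that approach (my addition, not from the original answer): it fetches the bytes synchronously with +[NSData dataWithContentsOfURL:], which is fine for a quick test but should be moved off the main thread in real code.
NSURL *artistImageURL = [NSURL URLWithString:@"http://userserve-ak.last.fm/serve/252/8581581.jpg"];
NSData *imageData = [NSData dataWithContentsOfURL:artistImageURL]; // blocks until the download finishes
NSImage *artistImage = imageData ? [[NSImage alloc] initWithData:imageData] : nil;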
It is more or less guaranteed that .representations contains NSImageRep* objects (though of course not always NSBitmapImageRep). To be on the safe side for future extensions, one can write something like the code below. It also takes multiple representations into account (as in some .icns and .tiff files).
@implementation NSImage (Extension)

- (void)makePixelSized {
    NSSize max = NSZeroSize;
    for (NSObject *o in self.representations) {
        if ([o isKindOfClass:NSImageRep.class]) {
            NSImageRep *r = (NSImageRep *)o;
            if (r.pixelsWide != NSImageRepMatchesDevice && r.pixelsHigh != NSImageRepMatchesDevice) {
                max.width = MAX(max.width, r.pixelsWide);
                max.height = MAX(max.height, r.pixelsHigh);
            }
        }
    }
    if (max.width > 0 && max.height > 0) {
        self.size = max;
    }
}

@end