How to access an animated GIF's frames - Objective-C

I have an animated GIF successfully loaded into an NSData or NSBitmapImageRep object (see the NSBitmapImageRep class reference).
I've figured out how to return data such as the number of frames in that GIF using:
NSNumber *frames = [bitmapRep valueForProperty:@"NSImageFrameCount"];
However, I'm a bit confused as to how I can actually access that frame as its own object.
I think one of these two methods will help, but I'm not actually sure how they'll get the individual frame for me.
+ representationOfImageRepsInArray:usingType:properties:
– representationUsingType:properties:
Any help appreciated. Thanks

I've figured out how to return data such as the number of frames in that GIF using:
NSNumber *frames = [bitmapRep valueForProperty:@"NSImageFrameCount"];
However, I'm a bit confused as to how I can actually access that frame as its own object.
To access a specific frame indexOfFrame (where 0 <= indexOfFrame < [frames intValue]), you only need to set NSImageCurrentFrame and you are done. There is no need to use CG functions or to make copies of frames; you can stay in the object-oriented Cocoa world. A small example that logs the duration of every GIF frame:
NSNumber *frames = [bitmapRep valueForProperty:@"NSImageFrameCount"];
if( frames != nil ){   // bitmapRep is a GIF imageRep
    for( NSUInteger i = 0; i < [frames intValue]; i++ ){
        [bitmapRep setProperty:NSImageCurrentFrame
                     withValue:[NSNumber numberWithUnsignedInteger:i]];
        NSLog(@"%2lu duration=%@",
              (unsigned long)i, [bitmapRep valueForProperty:NSImageCurrentFrameDuration]);
    }
}
Another example: write all frames of a GIF image as PNG files to the filesystem:
NSNumber *frames = [bitmapRep valueForProperty:@"NSImageFrameCount"];
if( frames != nil ){   // bitmapRep is a GIF imageRep
    for( NSUInteger i = 0; i < [frames intValue]; i++ ){
        [bitmapRep setProperty:NSImageCurrentFrame
                     withValue:[NSNumber numberWithUnsignedInteger:i]];
        NSData *repData = [bitmapRep representationUsingType:NSPNGFileType
                                                  properties:nil];
        [repData writeToFile:[NSString stringWithFormat:@"/tmp/gif_%02lu.png", (unsigned long)i]
                  atomically:YES];
    }
}

I've figured out how to return data such as the number of frames in that GIF using:
NSNumber *frames = [bitmapRep valueForProperty:@"NSImageFrameCount"];
However, I'm a bit confused as to how I can actually access that frame as its own object.
As far as I know, you can't—not from an NSBitmapImageRep.
Instead, create a CGImageSource from the GIF data, and use CGImageSourceCreateImageAtIndex to extract each frame (preferably as you need it).
Alternatively, you might try setting the NSImageCurrentFrame property. If you need a rep for each frame, make as many copies as there are frames (minus one, since you have the original), and set each rep's current frame to a different number. But I haven't tried that, so I'm not sure it will actually work.
Basically, NSBitmapImageRep's GIF support is weird, so you should just use CGImageSource.
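For illustration, here is a minimal sketch of that CGImageSource approach, assuming the GIF bytes are already in an NSData named gifData (that variable name is just for the example) and pre-ARC memory management:
#import <ImageIO/ImageIO.h>

// Create an image source from the raw GIF data.
CGImageSourceRef source = CGImageSourceCreateWithData((CFDataRef)gifData, NULL);
if (source != NULL) {
    size_t frameCount = CGImageSourceGetCount(source);
    for (size_t i = 0; i < frameCount; i++) {
        // Extract frame i as its own CGImage.
        CGImageRef frame = CGImageSourceCreateImageAtIndex(source, i, NULL);
        if (frame != NULL) {
            // Wrap it in an NSBitmapImageRep if you want a Cocoa object per frame.
            NSBitmapImageRep *frameRep = [[NSBitmapImageRep alloc] initWithCGImage:frame];
            // ... use frameRep here ...
            [frameRep release];
            CGImageRelease(frame);
        }
    }
    CFRelease(source);
}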
I think one of these two methods will help, but I'm not actually sure how they'll get the individual frame for me.
+ representationOfImageRepsInArray:usingType:properties:
– representationUsingType:properties:
No, those methods are for serializing an image (or image rep). They're for writing data out, not reading it in. (Notice what constants those methods expect in their type parameters.)

If you want to have a look at some working source code for a GIF decoder for iOS (it works for Mac OS X too), you can find AVGIF89A2MvidResourceLoader.m on GitHub. The approach is to use the ImageIO framework and call CGImageSourceCreateWithData() along with CGImageSourceCreateImageAtIndex() to get access to the Nth GIF image in the file. But there are some tricky details, such as detecting whether a transparent pixel appears in the GIF and how to write the results to a file to avoid running out of memory if the GIF is really long, that might not be obvious.
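As a small illustration of the ImageIO side of that approach, here is a sketch (not code from that project) that reads the per-frame delay from the GIF properties dictionary, again assuming the GIF bytes are in an NSData named gifData and pre-ARC memory management:
CGImageSourceRef source = CGImageSourceCreateWithData((CFDataRef)gifData, NULL);
size_t count = CGImageSourceGetCount(source);
for (size_t i = 0; i < count; i++) {
    // The per-frame properties contain a GIF-specific sub-dictionary with the delay time.
    CFDictionaryRef frameProps = CGImageSourceCopyPropertiesAtIndex(source, i, NULL);
    CFDictionaryRef gifProps = (CFDictionaryRef)CFDictionaryGetValue(frameProps, kCGImagePropertyGIFDictionary);
    if (gifProps != NULL) {
        NSNumber *delay = (NSNumber *)CFDictionaryGetValue(gifProps, kCGImagePropertyGIFDelayTime);
        NSLog(@"frame %lu delay=%@ seconds", (unsigned long)i, delay);
    }
    CFRelease(frameProps);
}
CFRelease(source);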

Related

How do I create a valid CGImageSourceRef from an ALAssetRepresentation?

I'm trying to use CGImageSourceCreateThumbnailAtIndex to efficiently create a resized version of an image. I have some existing code that does this with images from disk, and now I'm trying to use an image that comes from ALAssetsLibrary.
Here's my code:
ALAsset *asset;
ALAssetRepresentation *representation = [asset defaultRepresentation];
CGImageRef imageRef = [representation fullResolutionImage];
CGDataProviderRef provider = CGImageGetDataProvider(imageRef);
CGImageSourceRef sourceRef = CGImageSourceCreateWithDataProvider(provider, NULL);
NSDictionary *resizeOptions = @{
    kCGImageSourceCreateThumbnailWithTransform : @YES,
    kCGImageSourceCreateThumbnailFromImageAlways : @YES,
    kCGImageSourceThumbnailMaxPixelSize : @(2100)
};
CGImageRef resizedImage = CGImageSourceCreateThumbnailAtIndex(sourceRef, 0, resizeOptions);
The problem is that resizedImage is null, and CGImageSourceGetCount(sourceRef) returns 0. The data provider does have quite a bit of data in it, though, and the data does appear to be valid image data. The ALAsset comes from an iPhone 4S camera roll.
What am I missing? Why does CGImageSourceCreateWithDataProvider() create an image source with 0 images?
CGImageSource is for deserializing serialized images, such as JPEGs, PNGs, and whatnot.
CGImageGetDataProvider returns (the provider of) the raw pixel data of the image. It does not return serialized bytes in some external format. CGImageSource has no way to know what pixel format (color space, bits-per-component, alpha layout, etc.) any given raw pixel data is in.
You could try getting the URL of the asset rep and giving that to CGImageSourceCreateWithURL. If that doesn't work (e.g., not a file URL), you'll have to run the image through a CGImageDestination and create a CGImageSource with wherever you put the output.
(The one other thing to try would be to see whether the rep's filename is actually a full path, the way Cocoa often misuses the term. But you probably shouldn't count on that.)
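If you do end up going the CGImageDestination route, here is a minimal sketch under ARC, assuming imageRef is the CGImage from fullResolutionImage as in the question (kUTTypeJPEG is just one choice of output type):
#import <ImageIO/ImageIO.h>
#import <MobileCoreServices/MobileCoreServices.h> // for kUTTypeJPEG

// Re-serialize the raw CGImage into JPEG data...
NSMutableData *jpegData = [NSMutableData data];
CGImageDestinationRef destination = CGImageDestinationCreateWithData((__bridge CFMutableDataRef)jpegData,
                                                                     kUTTypeJPEG, 1, NULL);
CGImageDestinationAddImage(destination, imageRef, NULL);
CGImageDestinationFinalize(destination);
CFRelease(destination);

// ...so that CGImageSource has serialized bytes to work with.
CGImageSourceRef sourceRef = CGImageSourceCreateWithData((__bridge CFDataRef)jpegData, NULL);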
One thing you might try is the asset rep's CGImageWithOptions: method.
The documentation claims that:
This method returns the biggest, best representation available, unadjusted in any way.
But it says that about fullResolutionImage, too, and I'm not sure why this class would have both methods if they both do the same thing. I wonder if it's a copy-and-paste error.
Try CGImageWithOptions: with a bunch of thumbnail-creating options and see what happens.
Option #3 would be the rep's fullScreenImage. Depending on what sort of “thumbnail” you need, it may be cheaper and/or simpler to just use this, which will be no bigger than (approximately) the size of the device's screen.
This can also help...
ALAssetRepresentation *rep = [asset defaultRepresentation];
NSDictionary *options = [[NSDictionary alloc] initWithObjectsAndKeys:
                         (id)kCFBooleanTrue, (id)kCGImageSourceCreateThumbnailWithTransform,
                         (id)kCFBooleanTrue, (id)kCGImageSourceCreateThumbnailFromImageAlways,
                         (id)[NSNumber numberWithDouble:400], (id)kCGImageSourceThumbnailMaxPixelSize,
                         nil];
CGImageRef image = [rep CGImageWithOptions:options];

Animating images in UIImageView takes too much memory

I am using 150 images in a sequence for an animation.
Here is my code.
NSMutableArray *arrImages = [[NSMutableArray alloc] initWithCapacity:0];
for(int i = 0; i <= 158; i++)
{
    UIImage *image = [UIImage imageNamed:[NSString stringWithFormat:@"baby%05d.jpg", i]];
    [arrImages addObject:image];
}
babyimage.animationImages = arrImages;
[arrImages release];
babyimage.animationDuration = 6.15;
[babyimage startAnimating];
But it is taking too much memory. After playing it for about a minute, it shows memory warnings in the console and then the app crashes. I have already reduced the images' resolution, and I can't use fewer than 150 images without losing quality.
Is there any better way to do this animation without the memory issues?
Thanks a lot. Please help.
Instead of
[UIImage imageNamed:[NSString stringWithFormat:@"baby%05d.jpg", i]]
use
[UIImage imageWithContentsOfFile:[[NSBundle mainBundle] pathForResource:[NSString stringWithFormat:@"baby%05d.jpg", i] ofType:nil]]
The reason is that imageNamed: caches the image and does not release it until the app receives a memory warning.
Edit
Also, don't store the entire images in an array; just save the image names if you need to, or don't save anything at all. Keeping all the images in the array also takes a lot of memory (see the sketch below).
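For illustration, a minimal sketch of that idea, keeping only file paths in memory and decoding each frame on demand (the baby%05d.jpg naming is taken from the question; imagePaths and currentFrameIndex are just example names):
// Keep only the file paths in memory, not the decoded images.
NSMutableArray *imagePaths = [[NSMutableArray alloc] init];
for (int i = 0; i <= 158; i++) {
    NSString *path = [[NSBundle mainBundle] pathForResource:[NSString stringWithFormat:@"baby%05d", i]
                                                     ofType:@"jpg"];
    if (path != nil) {
        [imagePaths addObject:path];
    }
}

// Later, decode a single frame only when it is actually needed.
UIImage *frame = [UIImage imageWithContentsOfFile:[imagePaths objectAtIndex:currentFrameIndex]];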
I am not sure what the animation is, since you haven't specified it. But if I were writing the code, I would follow the algorithm below.
This assumes there is only one image shown at any particular time, although during a transition two images will briefly overlap.
procedure animate:
    i = 0
    photo_1 = allocate(file_address_array[i++])
    while i < MAX_PHOTOS
        photo_2 = allocate(file_address_array[i++])
        perform_animation(photo_1, photo_2)
        release(photo_1)
        photo_1 = photo_2
    release(photo_1)
I am not sure if this is the perfect way of doing such a thing, but it will be a lot more memory-efficient, because at most two images are alive at once.
(Maybe there's something wrong with the alloc/release logic, but this is consistent with my working at 5:14 in the morning.) Let me know if this doesn't work.

Multiple GLKView snapshots cause crash. Memory issue?

My goal is to make a video out of a short sequence of OpenGL frames (around 200 frames). In order to do this, I use the following code to create an array of images:
NSMutableArray *images = [NSMutableArray array];
KTEngine *engine = [KTEngine sharedInstance]; // OpenGL-based engine

for (unsigned int i = engine.animationContext.unitStart; i < engine.animationContext.unitEnd; ++i)
{
    NSLog(@"Render Image %d", i);
    [engine.animationContext update:i];
    [self.view setNeedsDisplay];
    [images addObject:[view snapshot]];
}
NSLog(@"Total image rendered %d", [images count]);
[self createVideoFileFromArray:images];
This works perfectly fine on the simulator, but not on the device (a Retina iPad). My guess is that the device cannot hold that many UIImages in memory (especially at 2048*1536). The crash always happens after 38 frames or so.
Now as for the solution, I thought of creating a video for every 10 frames and then stitching them all together, but how can I know when I have enough space (is the autorelease pool drained)?
Maybe I should use a thread, process 10 images, and fire it again for the next 10 frames once it's over?
Any idea?
It seems quite likely that you're running out of memory.
To reduce memory usage, you could store the images as NSData in PNG or JPG format instead. Both PNGs and JPGs are quite small when represented as data, but loading them into UIImage objects can consume a lot of memory.
I would advise you to do something like the code below in your loop. The autorelease pool is needed to drain the returned snapshot on each iteration.
NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
UIImage *image = [view snapshot];
NSData *imageData = UIImagePNGRepresentation(image);
[images addObject:imageData];
[pool release];
This of course requires your createVideoFileFromArray: method to handle pure image data instead of UIImage objects, but that should probably be feasible to implement.
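A hedged sketch of the corresponding change inside createVideoFileFromArray:, decoding one frame at a time from the stored data (the comment marks where the frame would be handed to whatever video-writing code you already have):
for (NSData *imageData in images) {
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    // Decode one frame at a time so only a single UIImage is alive at once.
    UIImage *frame = [UIImage imageWithData:imageData];
    // ... append `frame` to the video here ...
    [pool release];
}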

How to save app data in Xcode

I have been searching for many days for how to save my app's data. I found some material, but it was very complicated and badly explained. I need the data I entered in the text fields to still be there when I completely close my app and open it again. I tried a tutorial, but it only let me save about 8 text fields and I need to save thousands. I am just starting with Objective-C and Xcode, so if somebody wants to give me an answer, please make it very precise.
Alright, what I'd suggest would be putting all the data from your text fields into an array and saving that to a file, then loading it when you re-open the app.
The first thing you need is a save file. This function will create one for you.
- (NSString *)saveFilePath {
    NSString *path = [NSString stringWithFormat:@"%@/%@",
                      [[NSBundle mainBundle] resourcePath],
                      @"myfilename.plist"];
    return path;
}
Now that that's done you need to create your saving array. Hopefully you have your thousands of textfields already fitted into an array of some sort. If not, this will be a painful process regardless of how you tackle it. But anyway... (Here, labelArray will be the array of all your text fields/labels/etc.)
NSMutableArray *myArray = [[NSMutableArray alloc] init];
int i = 0;
while (i < labelArray.count) {
    [myArray addObject:[[labelArray objectAtIndex:i] text]];
    i++;
}
[myArray writeToFile:[self saveFilePath] atomically:YES];
[myArray release];
And the loading code would be something along the lines of
NSMutableArray* myArray = [[NSMutableArray arrayWithContentsOfFile:[self saveFilePath]]retain];
Then you'd simply load the data back into your array of text fields.
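A sketch of that last step, assuming the saved strings are in the same order as the text fields in labelArray (names taken from the code above):
// Push each saved string back into its matching text field.
for (NSUInteger i = 0; i < myArray.count && i < labelArray.count; i++) {
    [[labelArray objectAtIndex:i] setText:[myArray objectAtIndex:i]];
}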
Hope this helps.
It sounds like your application architecture may be unsound if you are planning on saving thousands of text fields' data in the fraction of a second you get while your app is closing. It would probably be better to save these as the user enters the data instead of waiting to save all the data at once.
To get the path you are going to write to (or read from!), you do the following:
NSString *writableDBPath = [[NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0] stringByAppendingPathComponent:@"MyFile.extension"];
And then use a method like writeToFile:atomically: of NSString, NSDictionary, etc. to write to that path.
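A minimal sketch of that, assuming the text field contents have been gathered into a dictionary (nameField, addressField and the key names are just for the example):
NSString *writableDBPath = [[NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0] stringByAppendingPathComponent:@"MyFile.plist"];

// Saving: property-list types (strings, numbers, arrays, dictionaries) can be written directly.
NSDictionary *savedFields = [NSDictionary dictionaryWithObjectsAndKeys:
                             nameField.text, @"name",
                             addressField.text, @"address",
                             nil];
[savedFields writeToFile:writableDBPath atomically:YES];

// Loading: returns nil if the file does not exist yet.
NSDictionary *loadedFields = [NSDictionary dictionaryWithContentsOfFile:writableDBPath];
nameField.text = [loadedFields objectForKey:@"name"];
addressField.text = [loadedFields objectForKey:@"address"];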

PDF writing from iPad

I am able to generate a PDF with the following code. Can someone please help me with how I would add text to this? Thanks in advance.
- (void)createPDFfromUIView:(UIView *)aView saveToDocumentsWithFileName:(NSString *)aFilename
{
    // Creates a mutable data object for updating with binary data, like a byte array
    NSMutableData *pdfData = [NSMutableData data];

    // Points the PDF converter to the mutable data object and to the UIView to be converted
    UIGraphicsBeginPDFContextToData(pdfData, aView.bounds, nil);
    UIGraphicsBeginPDFPage();

    // Draws rect to the view, and thus this is captured by UIGraphicsBeginPDFContextToData
    [aView drawRect:aView.bounds];

    // Remove PDF rendering context
    UIGraphicsEndPDFContext();

    // Retrieves the document directories from the iOS device
    NSArray *documentDirectories = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *documentDirectory = [documentDirectories objectAtIndex:0];
    NSString *documentDirectoryFilename = [documentDirectory stringByAppendingPathComponent:aFilename];

    // Instructs the mutable data object to write its contents to a file on disk
    [pdfData writeToFile:documentDirectoryFilename atomically:YES];
    NSLog(@"documentDirectoryFilename: %@", documentDirectoryFilename);
}
I've never done this (yet, I plan getting on it tomorrow). But I think the right place to go is the Quartz 2D programming guide. It's also available from your developer documentation if you need it offline.
Basically you do the same as you posted, but in addition to calling drawRect: you perform whatever text drawing you want in that same context.
There's useful information on drawing simple strings in that guide.
I hope it helps. It sure helped me!
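For illustration, a minimal sketch of drawing a string onto the PDF page right after UIGraphicsBeginPDFPage(), using UIKit's NSString drawing additions (the text, font and position are just example values):
UIGraphicsBeginPDFContextToData(pdfData, aView.bounds, nil);
UIGraphicsBeginPDFPage();

// Render the view as before...
[aView drawRect:aView.bounds];

// ...then draw some text on top of the same PDF page.
NSString *text = @"Hello, PDF";
UIFont *font = [UIFont systemFontOfSize:18.0];
[text drawAtPoint:CGPointMake(20.0, 40.0) withFont:font];

UIGraphicsEndPDFContext();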