What's the fastest way to load a big image on iPhone? - objective-c

Hi there,
I am building a scroll view that swipes through 100 images of houses.
It works, but for every image viewed the allocated memory increases by 2.5 MB, and eventually the app crashes because it runs out of memory.
I use this code to decompress each image:
- (void)decompress {
    // Force the image to be decoded now by drawing it once into a
    // throwaway bitmap context.
    const CGImageRef cgImage = [self CGImage];
    const size_t width = CGImageGetWidth(cgImage);
    const size_t height = CGImageGetHeight(cgImage);
    const CGColorSpaceRef colorspace = CGImageGetColorSpace(cgImage);
    const CGContextRef context = CGBitmapContextCreate(
        NULL,          /* where to store the data; NULL = don't care */
        width, height, /* width & height */
        8, width * 4,  /* bits per component, bytes per row */
        colorspace,
        (CGBitmapInfo)kCGImageAlphaNoneSkipFirst);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), cgImage);
    CGContextRelease(context);
}
but it's not working; the images still take a long time to load.

I also faced the same problem. The solution I opted for was to resize the image to 320x460; the resized image is then only about 50 KB. You can do the same thing.
Code for resizing the image:
Suppose your original image is in the image property (a UIImage) and the resized image is stored in editedImage, also a UIImage:
if (self.image.size.height > 460 || self.image.size.width > 320) {
    // Draw the original into a 320x480 context (original iPhone screen size)
    UIGraphicsBeginImageContext(CGSizeMake(320, 480));
    [self.image drawInRect:CGRectMake(0, 0, 320, 480)];
    self.editedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    NSLog(@"smallImage height %f", self.editedImage.size.height);
    NSLog(@"smallImage width %f", self.editedImage.size.width);
}
else {
    self.editedImage = self.image;
}

You need to do some memory optimization. Here is the idea; you will need to implement it yourself. Don't add all the images to the scroll view at once, just add the first five. Suppose the user starts on image 1; when they reach image 3, remove image 1 from the scroll view so its memory can be freed. As the user scrolls forward, keep adding the next image: when they are on image 4, add image 6, and so on. Do the same in reverse: when the user goes from image 4 back to image 3, remove image 6. This way only five images are ever in memory at once; a sketch follows below.
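A minimal sketch of that sliding window, assuming ARC, a paging scroll view, an imageNames array of 100 file names, and a pageViews mutable array holding either a UIImageView or NSNull per page (all hypothetical names):

- (void)scrollViewDidScroll:(UIScrollView *)scrollView {
    CGFloat pageWidth = scrollView.bounds.size.width;
    NSInteger currentPage = (NSInteger)(scrollView.contentOffset.x / pageWidth + 0.5);

    for (NSInteger page = 0; page < (NSInteger)self.imageNames.count; page++) {
        id slot = self.pageViews[page];
        BOOL loaded = ![slot isKindOfClass:[NSNull class]];
        // Keep only the current page and two neighbors on each side.
        BOOL shouldBeLoaded = labs(page - currentPage) <= 2;

        if (shouldBeLoaded && !loaded) {
            // Load this page. imageWithContentsOfFile: avoids the
            // imageNamed: cache, so the memory really is released later.
            UIImageView *imageView = [[UIImageView alloc] initWithFrame:
                CGRectMake(page * pageWidth, 0, pageWidth, scrollView.bounds.size.height)];
            imageView.image = [UIImage imageWithContentsOfFile:
                [[NSBundle mainBundle] pathForResource:self.imageNames[page] ofType:@"jpg"]];
            [scrollView addSubview:imageView];
            self.pageViews[page] = imageView;
        } else if (!shouldBeLoaded && loaded) {
            // Unload this page so its decompressed bitmap can be freed.
            [(UIImageView *)slot removeFromSuperview];
            self.pageViews[page] = [NSNull null];
        }
    }
}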

Three ways:
1. Try Apple's sample code called "StreetScroller"; it shows how to make a paging scroll view work properly.
2. Create thumbnails for the large images and save them to your own directory, then read them from URL (see the sketch below).
3. Use UIPageViewController.
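A minimal sketch of option 2, assuming hypothetical names and JPEG sources bundled with the app; the thumbnail is generated once and then served from the Caches directory:

// Hypothetical thumbnail cache: generate once, then load the small file.
- (UIImage *)thumbnailForImageNamed:(NSString *)name size:(CGSize)size {
    NSString *cachesDir = [NSSearchPathForDirectoriesInDomains(
        NSCachesDirectory, NSUserDomainMask, YES) objectAtIndex:0];
    NSString *thumbPath = [cachesDir stringByAppendingPathComponent:
        [name stringByAppendingString:@"_thumb.jpg"]];

    UIImage *thumb = [UIImage imageWithContentsOfFile:thumbPath];
    if (thumb) return thumb; // cache hit: only the small file is decoded

    // Cache miss: scale the full-size image down once and save it.
    UIImage *fullImage = [UIImage imageWithContentsOfFile:
        [[NSBundle mainBundle] pathForResource:name ofType:@"jpg"]];
    UIGraphicsBeginImageContextWithOptions(size, YES, 0.0);
    [fullImage drawInRect:CGRectMake(0, 0, size.width, size.height)];
    thumb = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    [UIImageJPEGRepresentation(thumb, 0.8) writeToFile:thumbPath atomically:YES];
    return thumb;
}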

Related

Undo state of bitmap (CGContext)

I'm making a multiplayer game that involves drawing lines, and I'm now trying to add online multiplayer. I've been struggling with one part: I need to be able to revert drawn lines in case a packet from the server reaches the client late. I've searched Stack Overflow but haven't found a real answer on how to "undo" a bitmap context. The biggest constraint is that the drawing needs to be very fast, since the game updates every 20 milliseconds. I have come up with and tried a few different approaches:
1. Save the state of the whole context and then redraw it. This is probably the slowest method.
2. Save only a part of the context (100x100) into another, hidden bitmap by looping through each pixel, then loop through each pixel of that bitmap to copy it back into the main bitmap shown on screen.
3. Save each point of the drawn path in a CGMutablePathRef; to revert the context, draw this path with a transparent color (0,0,0,0).
4. Save the position of each pixel that gets drawn in a separate array, then set those pixels' alpha to 0 (in the drawn bitmap) when I need to undo.
The last approach should be the fastest of them all, but I'm not sure how to get the position of each drawn pixel without doing it completely manually. Right now I use this code to draw lines:
CGContextSetStrokeColorWithColor(cacheContext, [color CGColor]);
CGContextSetLineCap(cacheContext, kCGLineCapRound);
CGContextSetLineWidth(cacheContext, 6+thickness);
CGContextBeginPath(cacheContext);
CGContextMoveToPoint(cacheContext, point1.x, point1.y);
CGContextAddLineToPoint(cacheContext, point2.x, point2.y);
CGContextStrokePath(cacheContext);
CGRect dirtyPoint1 = CGRectMake(point1.x-10, point1.y-10, 20, 20);
CGRect dirtyPoint2 = CGRectMake(point2.x-10, point2.y-10, 20, 20);
[self setNeedsDisplayInRect:CGRectUnion(dirtyPoint1, dirtyPoint2)];
Here is how the CGBitmapContext is set up:
- (BOOL)initContext:(CGSize)size {
    scaleFactor = [[UIScreen mainScreen] scale]; // 1 = non-retina, 2 = retina

    // Each pixel in the bitmap is represented by 4 bytes: 8 bits each of
    // red, green, blue, and alpha.
    int bitmapBytesPerRow = (size.width * 4 * scaleFactor);
    bitmapByteCount = (bitmapBytesPerRow * (size.height * scaleFactor));

    // Allocate memory for the image data. This is the destination in memory
    // where any drawing to the bitmap context will be rendered.
    cacheBitmap = malloc(bitmapByteCount);
    if (cacheBitmap == NULL) {
        return NO;
    }

    CGBitmapInfo bitmapInfo = kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Little;
    colorSpace = CGColorSpaceCreateDeviceRGB();
    cacheContext = CGBitmapContextCreate(cacheBitmap,
                                         size.width * scaleFactor,
                                         size.height * scaleFactor,
                                         8, bitmapBytesPerRow,
                                         colorSpace, bitmapInfo);
    CGContextScaleCTM(cacheContext, scaleFactor, scaleFactor);
    CGColorSpaceRelease(colorSpace);

    // Clear the whole canvas to transparent. Note: the original swapped
    // width and height here; and since the CTM is already scaled, the
    // fill rect is expressed in points.
    CGContextSetRGBFillColor(cacheContext, 0, 0, 0, 0.0);
    CGContextFillRect(cacheContext, (CGRect){CGPointZero, size});
    return YES;
}
Is there any other, better way to undo the bitmap? If not, how can I get the position of each pixel that gets drawn with Core Graphics? Is that even possible?
Your 4th approach will either duplicate the whole canvas bitmap (if you use a flat NxM matrix representation) or turn into a performance mess if you use a map-based structure or something similar.
Actually, I believe the 2nd way does the trick. I have implemented undo that way a few times over the years, including in a DirectX-based drawing app with a 25-30 fps rendering pipeline.
However, your description of #2 oddly mentions a "loop" you want to perform across the area. You do not need a loop; what you need is a proper API call for copying a portion of the bitmap/graphics context. CGContextDrawImage can be used both to preserve a portion of your canvas and to undo/redo the drawing, as sketched below.
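A minimal sketch of that idea against the question's cacheContext, assuming ARC and two assumed ivars, savedPatch (CGImageRef) and savedRect (CGRect): snapshot the region a stroke is about to dirty, and blit it back to undo.

// Before drawing a stroke: save the region it is about to dirty.
// rect is in the context's user space (points, bottom-left origin).
- (void)saveRegion:(CGRect)rect {
    if (savedPatch) CGImageRelease(savedPatch);
    CGFloat h = CGBitmapContextGetHeight(cacheContext) / scaleFactor; // height in points
    // CGImage rects have a top-left origin; CG context rects have a
    // bottom-left origin, so flip y when cropping the snapshot.
    CGRect imageRect = CGRectMake(rect.origin.x * scaleFactor,
                                  (h - rect.origin.y - rect.size.height) * scaleFactor,
                                  rect.size.width * scaleFactor,
                                  rect.size.height * scaleFactor);
    CGImageRef whole = CGBitmapContextCreateImage(cacheContext); // copy-on-write snapshot
    savedPatch = CGImageCreateWithImageInRect(whole, imageRect); // keep only the dirty part
    savedRect = rect;
    CGImageRelease(whole);
}

// To undo: draw the saved pixels back over the stroke.
- (void)restoreRegion {
    if (savedPatch == NULL) return;
    CGContextSaveGState(cacheContext);
    CGContextSetBlendMode(cacheContext, kCGBlendModeCopy); // replace pixels, don't composite
    CGContextDrawImage(cacheContext, savedRect, savedPatch);
    CGContextRestoreGState(cacheContext);
    CGImageRelease(savedPatch);
    savedPatch = NULL;
}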

Can you resize a UIImage on iOS whilst retaining scale? For thumbnail generation

My app deals with images taken from the camera. I want to be able to resize these images to thumbnail size so I can display them inside cells on table views.
Resizing "on the fly" seems rather slow, so I was planning to resize them at the point the user imports them into the app, and store both the full size and thumbnail on my business objects, using the thumbnail for things like table views
This is the code that I use for generating thumbnails :
#import "ImageUtils.h"
#implementation ImageUtils
+(UIImage*) generateThumbnailFromImage:(UIImage*)theImage
{
UIImage * thumbnail;
CGSize destinationSize = CGSizeMake(100,100);
UIGraphicsBeginImageContext(destinationSize);
[theImage drawInRect:CGRectMake(0,0,destinationSize.width, destinationSize.height)];
thumbnail = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return thumbnail;
}
#end
Whilst this appears to resize the image correctly, and greatly improves the responsiveness of my tables, the scaling of the images is off.
The above will create a UIImage of 100 * 100; how can I force it to use an AspectFit or AspectFill approach?
The UIImageView I have on my table cell is 100 * 100, so I need to resize the image to fit into that, without distorting it.
Any pointers would be great!
I realize this is an old thread; however, if you stumble upon this, there is now a simpler method in iOS 6. I lost a lot of time trying to use this solution until I found the following in Apple's documentation.
Use either:
+ (UIImage *)imageWithCGImage:(CGImageRef)imageRef
scale:(CGFloat)scale
orientation:(UIImageOrientation)orientation
or
+ (UIImage *)imageWithCIImage:(CIImage *)ciImage
scale:(CGFloat)scale
orientation:(UIImageOrientation)orientation
If you wanted to use this to make a thumbnail from a UIImage named "image", you can use one line of code:
UIImage *thumbnail = [UIImage imageWithCGImage:image.cgImage
scale:someScale
orientation:image.imageOrientation];
I've found that numbers greater than one shrink the image and numbers less than one expand it. It must use the scale as a denominator on the underlying size property.
Make sure you import the necessary frameworks!
Input:
imageSize // The image size, for example {1024,768}
maxThumbSize // The max thumbnail size, for example {100,100}
Pseudo code:
thumbAspectRatio = maxThumbSize.width / maxThumbSize.height
imageAspectRatio = imageSize.width / imageSize.height
if ( imageAspectRatio == thumbAspectRatio )
{
    // The aspect ratios are equal
    // Resize image to maxThumbSize
}
else if ( imageAspectRatio > thumbAspectRatio )
{
    // The image is wider
    // Thumbnail width: maxThumbSize.width
    // Thumbnail height: maxThumbSize.width / imageAspectRatio
}
else if ( imageAspectRatio < thumbAspectRatio )
{
    // The image is taller
    // Thumbnail width: maxThumbSize.height * imageAspectRatio
    // Thumbnail height: maxThumbSize.height
}
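Turned into a hedged Objective-C sketch (the method name is an assumption):

// Hypothetical aspect-fit thumbnail helper based on the pseudocode above.
+ (UIImage *)thumbnailForImage:(UIImage *)image fittingSize:(CGSize)maxThumbSize
{
    CGFloat thumbAspectRatio = maxThumbSize.width / maxThumbSize.height;
    CGFloat imageAspectRatio = image.size.width / image.size.height;

    CGSize thumbSize = maxThumbSize;
    if (imageAspectRatio > thumbAspectRatio) {
        // The image is wider: pin the width, derive the height.
        thumbSize.height = maxThumbSize.width / imageAspectRatio;
    } else if (imageAspectRatio < thumbAspectRatio) {
        // The image is taller: pin the height, derive the width.
        thumbSize.width = maxThumbSize.height * imageAspectRatio;
    }

    UIGraphicsBeginImageContextWithOptions(thumbSize, NO, 0.0); // retina-safe
    [image drawInRect:CGRectMake(0, 0, thumbSize.width, thumbSize.height)];
    UIImage *thumbnail = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return thumbnail;
}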

Creating thumbnail for an image grid

I'm building an app like Apple's Photos app on the iPad. I have large full-screen images and I show them in a scroll view that manages zooming and paging. The main problem happens when I try to create a grid of thumbnails of the images. I create them as a UIImageView overlapped on a UIButton. It all works, but when I try the app on the iPad it uses a lot of memory; I suppose that's due to the rescaling of the images. Is there a way to create a UIImageView with a small image, rescaled from the large one, without using so much memory?
You can use UIGraphics to create a thumbnail. Here's the code to do it:
// Assumes: length = thumbnail side, sideFull = the shorter side of the
// original image, clippedRect = CGRectMake(0, 0, length, length), and
// mainImage / mainImageView = the full-size image and its view.
UIGraphicsBeginImageContext(CGSizeMake(length, length));
CGContextRef currentContext = UIGraphicsGetCurrentContext();
CGContextClipToRect(currentContext, clippedRect);
CGFloat scaleFactor = length / sideFull;
if (widthGreaterThanHeight) {
    // A landscape image: shift the original image to the left when it is
    // drawn into the context, so the center square is kept.
    CGContextTranslateCTM(currentContext, -((mainImage.size.width - sideFull) / 2) * scaleFactor, 0);
}
else {
    // A portrait image: shift the original image upwards when it is
    // drawn into the context.
    CGContextTranslateCTM(currentContext, 0, -((mainImage.size.height - sideFull) / 2) * scaleFactor);
}
// This automatically scales any CGImage down/up to the required thumbnail
// side (length) when it gets drawn into the context on the next line.
CGContextScaleCTM(currentContext, scaleFactor, scaleFactor);
[mainImageView.layer renderInContext:currentContext];
UIImage *thumbnail = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext(); // balance the matching Begin call
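Alternatively (this is not from the original answer), the ImageIO framework can produce a downsampled thumbnail directly from the file without decoding the full-size bitmap first, which keeps peak memory much lower. A minimal sketch, assuming the image lives at fileURL:

#import <ImageIO/ImageIO.h>

// Create a thumbnail of at most 200 pixels on its longer side.
CGImageSourceRef source = CGImageSourceCreateWithURL((__bridge CFURLRef)fileURL, NULL);
NSDictionary *options = @{
    (id)kCGImageSourceCreateThumbnailFromImageAlways : @YES,
    (id)kCGImageSourceCreateThumbnailWithTransform : @YES, // honor EXIF orientation
    (id)kCGImageSourceThumbnailMaxPixelSize : @(200)
};
CGImageRef thumbRef = CGImageSourceCreateThumbnailAtIndex(source, 0,
                          (__bridge CFDictionaryRef)options);
UIImage *thumb = [UIImage imageWithCGImage:thumbRef];
CGImageRelease(thumbRef);
CFRelease(source);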

CGImageCreateWithMask with an image as a mask

I am trying to use an image (270 degrees of a circle, similar to a Pac-Man logo, painted with Core Graphics) to create a mask. What I am doing is this:
1. creating a Core Graphics path
CGContextSaveGState(context);
CGContextBeginPath(context);
CGContextMoveToPoint(context,circleCenter.x,circleCenter.y);
//CGContextSetAllowsAntialiasing(myBitmapContext, YES);
CGContextAddArc(context,circleCenter.x, circleCenter.y,circleRadius,startingAngle, endingAngle, 0); // 0 is counterclockwise
CGContextClosePath(context);
CGContextSetRGBStrokeColor(context,1.0,0.0,0.0,1.0);
CGContextSetRGBFillColor(context,1.0,0.0,0.0,0.2);
CGContextDrawPath(context, kCGPathFillStroke);
2. then creating an image of the context with the path just painted
CGImageRef pacmanImage = CGBitmapContextCreateImage (context);
3. restoring the context
CGContextRestoreGState(context);
CGContextSaveGState(context);
4. creating a 1 bit mask (which will provide the black-white mask)
bitsPerComponent = 1;
bitsPerPixel = bitsPerComponent * 1 ;
bytesPerRow = (CGImageGetWidth(imgToMaskRef) * bitsPerPixel);
mask = CGImageCreate(CGImageGetWidth(imgToMaskRef),
CGImageGetHeight(imgToMaskRef),
bitsPerComponent,
bitsPerPixel,
bytesPerRow,
greyColorSpace,
kCGImageAlphaNone,
CGImageGetDataProvider(pacmanImage),
NULL, //decode
YES, //shouldInterpolate
kCGRenderingIntentDefault);
5. masking the imgToMaskRef (which is a CGImageRef imgToMaskRef =imgToMask.CGImage;) with the mask just created
imageMaskedWithImage = CGImageCreateWithMask(imgToMaskRef, mask);
CGContextDrawImage(context,imgRectBox, imageMaskedWithImage);
CGImageRef maskedImageFinal = CGBitmapContextCreateImage (context);
6. returning the maskedImageFinal to the caller of this method (as wheelChoiceMadeState, which is a CGImageRef) who then updates the CALayer contents property with the image
theLayer.contents = (id) wheelChoiceMadeState;
The problem I am seeing is that the mask does not work properly and looks very strange indeed: I get strange patterns across the path painted by Core Graphics. My hunch is that there is something wrong with CGImageGetDataProvider(), but I am not sure.
Any help would be appreciated
thank you
CGImageGetDataProvider does not change the data at all. If the data of pacmanImage does not exactly match the parameters passed to CGImageCreate (bitsPer..., bytesPer..., colorSpace, ...), the result is undefined. If it did match exactly, there would be no point in creating the mask.
You need to create a grayscale CGBitmapContext to draw the mask into, and a CGImage that uses the same pixels and parameters as that bitmap. You can then use the CGImage to mask another image.
Only use CGBitmapContextCreateImage if you want a snapshot of a CGBitmapContext that you will continue to modify. For a single-use bitmap, pass the same buffer to both the bitmap context and the matching CGImage you create.
Edit:
finalRect is the size the final image should be. It is either large enough to hold the original image, and the pacman is positioned inside it, or it is large enough to hold the pacman, and the original image is cropped to fit. In this example, the original image is cropped. Otherwise the pacman path would have to be positioned relative to the original image.
maskContext = CGBitmapContextCreate( ... , finalRect.size.width , finalRect.size.height , ... );
// add the pacman path and set the stroke and fill colors
CGContextDrawPath( maskContext , kCGPathFillStroke );
maskImage = CGBitmapContextCreateImage( maskContext );
imageToMask = CGImageCreateWithImageInRect( originalImage , finalRect );
finalImage = CGImageCreateWithMask( imageToMask , maskImage );
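Filling in the elided parameters as a hedged, self-contained sketch; the addPacmanPathToContext helper is a stand-in for the path-building code in the question:

// Grayscale mask context: 8 bits per pixel, no alpha.
size_t width = (size_t)finalRect.size.width;
size_t height = (size_t)finalRect.size.height;
CGColorSpaceRef gray = CGColorSpaceCreateDeviceGray();
CGContextRef maskContext = CGBitmapContextCreate(NULL, width, height,
                                                 8, width, gray, kCGImageAlphaNone);
CGColorSpaceRelease(gray);

// When a grayscale image is used as a mask, black areas show through and
// white areas are masked out, so start white and paint the shape black.
CGContextSetGrayFillColor(maskContext, 1.0, 1.0);
CGContextFillRect(maskContext, CGRectMake(0, 0, width, height));
CGContextSetGrayFillColor(maskContext, 0.0, 1.0);
addPacmanPathToContext(maskContext); // hypothetical path-building helper
CGContextFillPath(maskContext);

CGImageRef maskImage = CGBitmapContextCreateImage(maskContext);
CGImageRef imageToMask = CGImageCreateWithImageInRect(originalImage, finalRect);
CGImageRef finalImage = CGImageCreateWithMask(imageToMask, maskImage);

CGContextRelease(maskContext);
CGImageRelease(maskImage);
CGImageRelease(imageToMask);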

crop image from certain portion of screen in iphone programmatically

NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
CGSize contextSize = CGSizeMake(320, 400);
UIGraphicsBeginImageContext(contextSize);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *savedImg = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[self setSaveImage:savedImg];
[pool drain];
I use this to extract part of the image from the main screen.
With UIGraphicsBeginImageContext I can only specify a size. Is there any way to use a CGRect, or some other way to extract the image from a specific portion of the screen, i.e. (x, y, 320, 400) or something like that?
Hope this helps:
// Create new image context sized to the crop region (retina safe)
UIGraphicsBeginImageContextWithOptions(size, NO, 0.0);
// Offset the drawing so the desired region at (x, y) lands at the origin
CGRect rect = CGRectMake(-x, -y, existingImage.size.width, existingImage.size.height);
// Draw the full image into the context; everything outside the
// context's bounds is clipped away
[existingImage drawInRect:rect];
// Grab the cropped image and end the image context
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
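Alternatively, if you already have the UIImage and just need a region of it, CGImageCreateWithImageInRect crops without a drawing pass; a short sketch (the crop rect is in pixels and assumed to lie within the image):

// Crop by slicing the backing CGImage; no drawing pass needed.
CGRect cropRect = CGRectMake(0, 50, 320, 400);
CGImageRef croppedRef = CGImageCreateWithImageInRect(existingImage.CGImage, cropRect);
UIImage *cropped = [UIImage imageWithCGImage:croppedRef];
CGImageRelease(croppedRef);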
This question is really a duplicate of several other questions, including this one: How to crop the UIImage?, but since it took me a while to find a solution, I will cross-post again.
In my quest for a solution that I could more easily understand (and written in Swift), I arrived at this:
I wanted to be able to crop from a region based on an aspect ratio, and scale to a size based on an outer bounding extent. Here is my variation:
import UIKit
import AVFoundation
import ImageIO

class Image {
    class func crop(image:UIImage, crop source:CGRect, aspect:CGSize, outputExtent:CGSize) -> UIImage {

        let sourceRect = AVMakeRectWithAspectRatioInsideRect(aspect, source)
        let targetRect = AVMakeRectWithAspectRatioInsideRect(aspect, CGRect(origin: CGPointZero, size: outputExtent))

        let opaque = true, deviceScale:CGFloat = 0.0 // use scale of device's main screen
        UIGraphicsBeginImageContextWithOptions(targetRect.size, opaque, deviceScale)

        // Scale the crop rect up to the output extent, then offset the
        // drawing so the crop region lands at the origin.
        let scale = max(
            targetRect.size.width / sourceRect.size.width,
            targetRect.size.height / sourceRect.size.height)

        let drawRect = CGRect(
            origin: CGPoint(x: -sourceRect.origin.x * scale, y: -sourceRect.origin.y * scale),
            size: CGSize(width: image.size.width * scale, height: image.size.height * scale))
        image.drawInRect(drawRect)

        let scaledImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return scaledImage
    }
}
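For example (values are hypothetical), cropping an 800x600 region at the top-left of a photo down to a 400x300 thumbnail would look like:

let photo = UIImage(named: "house")! // hypothetical source image
let cropped = Image.crop(photo,
    crop: CGRect(x: 0, y: 0, width: 800, height: 600),
    aspect: CGSize(width: 4, height: 3),
    outputExtent: CGSize(width: 400, height: 300))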
There were a couple of things I found confusing: the separate concerns of cropping and resizing. Cropping is handled by the origin of the rect that you pass to drawInRect, and scaling is handled by the size portion. In my case, I needed to relate the size of the cropping rect on the source to my output rect of the same aspect ratio. The scale factor is then output / input, and this needs to be applied to the drawRect (passed to drawInRect).
One caveat is that this approach effectively assumes that the image you are drawing is larger than the image context. I have not tested this, but I think you can handle cropping/zooming with this code by explicitly passing the aforementioned scale factor as the scale parameter of the image context. By default, UIKit applies a multiplier based on the screen resolution.
Finally, it should be noted that this UIKit approach is higher level than the Core Graphics / Quartz and Core Image approaches, and it seems to handle image orientation issues. It is also worth mentioning that it is pretty fast, second only to ImageIO, according to this post: http://nshipster.com/image-resizing/