Changing the size of a UIImage - objective-c

so I have this function:
- (UIImage*)imageByCombiningImage:(UIImage*)firstImage withImage:(UIImage*)secondImage {
    UIImage *image = nil;
    UIImageView *temp_img = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 400, 400)];
    temp_img.image = secondImage;
    secondImage = temp_img.image;
    CGSize newImageSize = CGSizeMake(MAX(firstImage.size.width, secondImage.size.width), MAX(firstImage.size.height, secondImage.size.height));
    if (UIGraphicsBeginImageContextWithOptions != NULL) {
        UIGraphicsBeginImageContextWithOptions(newImageSize, NO, [[UIScreen mainScreen] scale]);
    } else {
        UIGraphicsBeginImageContext(newImageSize);
    }
    [firstImage drawAtPoint:CGPointMake(roundf((newImageSize.width-firstImage.size.width)/2),
                                        roundf((newImageSize.height-firstImage.size.height)/2))];
    [secondImage drawAtPoint:CGPointMake(roundf(100),
                                         roundf(100))];
    image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
I'm trying to change the size of the secondImage UIImage so that when the two images are merged I can adjust its size to make the result look good.
Any help is appreciated, thanks!

Note the "0" in UIGraphicsBeginImageContextWithOptions:
UIGraphicsBeginImageContextWithOptions(newSize, NO, 0);
CGContextRef context = UIGraphicsGetCurrentContext();
CGAffineTransform flipVertical = CGAffineTransformMake(1, 0, 0, -1, 0, newSize.height);
CGContextConcatCTM(context, flipVertical);
...
// make anySize the size you want and pt any point
CGContextDrawImage(context, (CGRect){ pt1, anySize1 }, [first CGImage]);
// make anySize the size you want and pt any point
CGContextDrawImage(context, (CGRect){ pt2, anySize2 }, [secondImage CGImage]);
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
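Putting it together with the original method, a minimal sketch might look like this (the rect used for the second image is a placeholder you would adjust for your layout):
- (UIImage *)imageByCombiningImage:(UIImage *)firstImage withImage:(UIImage *)secondImage {
    CGSize newImageSize = CGSizeMake(MAX(firstImage.size.width, secondImage.size.width),
                                     MAX(firstImage.size.height, secondImage.size.height));
    UIGraphicsBeginImageContextWithOptions(newImageSize, NO, 0);
    CGContextRef context = UIGraphicsGetCurrentContext();
    // flip the coordinate system so CGContextDrawImage does not draw the CGImages upside down
    CGContextConcatCTM(context, CGAffineTransformMake(1, 0, 0, -1, 0, newImageSize.height));
    // draw the first image full size, then the second at whatever rect looks right
    CGContextDrawImage(context, CGRectMake(0, 0, newImageSize.width, newImageSize.height), firstImage.CGImage);
    CGContextDrawImage(context, CGRectMake(100, 100, 200, 200), secondImage.CGImage); // placeholder rect
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}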

Related

Create CGImage From CGBitmapContext and Add to UIImageView

I am trying to create a snapshot of a UICollectionViewCell by creating a CGBitmapContext. I am not entirely clear on how to do this or how to use the associated classes, but after a bit of research I wrote the following method, which is called from inside my UICollectionViewCell subclass:
- (void)snapShotOfCell
{
    float scaleFactor = [[UIScreen mainScreen] scale];
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(NULL, self.frame.size.width * scaleFactor, self.frame.size.height * scaleFactor, 8, self.frame.size.width * scaleFactor * 4, colorSpace, kCGImageAlphaPremultipliedFirst);
    CGImageRef image = CGBitmapContextCreateImage(context);
    UIImage *snapShot = [[UIImage alloc] initWithCGImage:image];
    UIImageView *imageView = [[UIImageView alloc] initWithFrame:self.frame];
    imageView.image = snapShot;
    imageView.opaque = YES;
    [self addSubview:imageView];
    CGImageRelease(image);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
}
The result is that the image does not appear. Upon debugging, I can determine that I have a valid (non-nil) context, CGImage, UIImage and UIImageView, but nothing appears on screen. Can someone tell me what I am missing?
You can add this as a category to UIView and it will be accessible for any view:
- (UIImage*) snapshot
{
    UIGraphicsBeginImageContextWithOptions(self.frame.size, YES /*opaque*/, 0 /*auto scale*/);
    [self.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage* image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
Then you just need to do [self addSubview:[[UIImageView alloc] initWithImage:self.snapshot]] from your cell object.
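For reference, the matching category header might look like this (the file name is just an assumption):
// UIView+Snapshot.h (assumed name)
@interface UIView (Snapshot)
- (UIImage *)snapshot;
@end
The method above then goes inside the corresponding @implementation UIView (Snapshot) block in the .m file.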
[EDIT]
Given the need for asynchronous rendering (totally understandable), this can be achieved using dispatch queues. I think this would work:
typedef void(^ImageOutBlock)(UIImage* image);

- (void) snapshotAsync:(ImageOutBlock)block
{
    CGFloat scale = [[UIScreen mainScreen] scale];
    CALayer* layer = self.layer;
    CGRect frame = self.frame;
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^() {
        // note: this bitmap context is never drawn into; the UIGraphics context
        // below is what actually produces the image, so these CG lines could be dropped
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef context = CGBitmapContextCreate(NULL, frame.size.width * scale, frame.size.height * scale, 8, frame.size.width * scale * 4, colorSpace, kCGImageAlphaPremultipliedFirst);
        UIGraphicsBeginImageContextWithOptions(frame.size, YES /*opaque*/, scale);
        [layer renderInContext:UIGraphicsGetCurrentContext()];
        UIImage* image = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        CGContextRelease(context);
        CGColorSpaceRelease(colorSpace);
        dispatch_async(dispatch_get_main_queue(), ^() {
            block(image);
        });
    });
}
[EDIT]
- (void) execute
{
    __weak typeof(self) weakSelf = self;
    [self snapshotAsync:^(UIImage* image) {
        [weakSelf addSubview:[[UIImageView alloc] initWithImage:image]];
    }];
}

Can UIImage change the size and resolution of pictures?

If I pick a 640*960 picture and put it into a 200*300 UIImage, will that picture be uploaded to the server as a 200*300 picture?
You can use the following methods to resize your image:
+ (UIImage *) imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize {
    //UIGraphicsBeginImageContext(newSize);
    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0.0);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}

+ (UIImage *) imageWithImage:(UIImage*)sourceImage scaledToWidth:(float)i_width { // method to scale an image according to width
    float oldWidth = sourceImage.size.width;
    float scaleFactor = i_width / oldWidth;
    float newHeight = sourceImage.size.height * scaleFactor;
    float newWidth = oldWidth * scaleFactor;
    UIGraphicsBeginImageContext(CGSizeMake(newWidth, newHeight));
    [sourceImage drawInRect:CGRectMake(0, 0, newWidth, newHeight)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
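Calling these helpers might look like this (ImageUtils and originalImage are placeholders for whatever class and image you use):
// "ImageUtils" is a hypothetical class/category that holds the two methods above
UIImage *resized = [ImageUtils imageWithImage:originalImage scaledToSize:CGSizeMake(200, 300)];
UIImage *scaledToWidth = [ImageUtils imageWithImage:originalImage scaledToWidth:320.0];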
- (void)viewDidLoad
{
    [super viewDidLoad];
    // Create the image from a png file
    imageOriginal = [UIImage imageNamed:@"bg-640by960.png"];
    imageView = [[UIImageView alloc] initWithImage:imageOriginal];
    imageView.tag = 200;
    // Get size of current image
    size = [imageOriginal size];
    // Frame location in view to show original image
    [imageView setFrame:CGRectMake(0, 0, size.width, size.height)];
    [[self view] addSubview:imageView];
    [self.view bringSubviewToFront:btnCrop];
    // To resize the image, call the method below
    [[self.view viewWithTag:200] removeFromSuperview];
    [self squareImageWithImage:imageOriginal scaledToSize:CGSizeMake(200, 300)];
}
- (UIImage *)squareImageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize
{
    double ratio;
    double delta;
    CGPoint offset;
    // make a new square size, that is the resized image's width
    CGSize sz = CGSizeMake(newSize.width, newSize.width);
    // figure out if the picture is landscape or portrait, then
    // calculate the scale factor and offset
    if (image.size.width > image.size.height) {
        ratio = newSize.width / image.size.width;
        delta = (ratio*image.size.width - ratio*image.size.height);
        offset = CGPointMake(delta/2, 0);
    } else {
        ratio = newSize.width / image.size.height;
        delta = (ratio*image.size.height - ratio*image.size.width);
        offset = CGPointMake(0, delta/2);
    }
    // make the final clipping rect based on the calculated values
    CGRect clipRect = CGRectMake(-offset.x, -offset.y,
                                 (ratio * image.size.width) + delta,
                                 (ratio * image.size.height) + delta);
    // start a new context, with scale factor 0.0 so retina displays get
    // a high quality image
    if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)]) {
        UIGraphicsBeginImageContextWithOptions(sz, YES, 0.0);
    } else {
        UIGraphicsBeginImageContext(sz);
    }
    UIRectClip(clipRect);
    [image drawInRect:clipRect];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    imageView = [[UIImageView alloc] initWithImage:newImage];
    [imageView setFrame:CGRectMake(60, 60, newSize.width, newSize.height)];
    [self.view addSubview:imageView];
    return newImage;
}

Simple resizing of UIImage in Xcode

Is there any way to resize a UIImage in as few lines as possible? I don't care about the aspect ratio, I just want to set the image resolution to 80x60. That's all.
This may be overkill, but you can simply take your image, create a graphics context at the resolution you want, and then set tempImage as the UIImageView's image, overwriting the original.
UIImage *image = YourImageView.image;
UIImage *tempImage = nil;
CGSize targetSize = CGSizeMake(80,60);
UIGraphicsBeginImageContext(targetSize);
CGRect thumbnailRect = CGRectMake(0, 0, 0, 0);
thumbnailRect.origin = CGPointMake(0.0,0.0);
thumbnailRect.size.width = targetSize.width;
thumbnailRect.size.height = targetSize.height;
[image drawInRect:thumbnailRect];
tempImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
YourImageView.image = tempImage;
- (void)resizeImage:(UIImage *)image
{
    CGSize origImageSize = [image size];
    CGRect newRect = CGRectMake(0, 0, 80, 60);
    float ratio = MAX(newRect.size.width / origImageSize.width,
                      newRect.size.height / origImageSize.height);
    UIGraphicsBeginImageContextWithOptions(newRect.size, NO, 0.0);
    UIBezierPath *path = [UIBezierPath bezierPathWithRoundedRect:newRect
                                                    cornerRadius:5.0];
    [path addClip];
    CGRect imageRect;
    imageRect.size.width = ratio * origImageSize.width;
    imageRect.size.height = ratio * origImageSize.height;
    imageRect.origin.x = (newRect.size.width - imageRect.size.width) / 2.0;
    imageRect.origin.y = (newRect.size.height - imageRect.size.height) / 2.0;
    [image drawInRect:imageRect];
    UIImage *smallImage = UIGraphicsGetImageFromCurrentImageContext();
    NSData *data = UIImagePNGRepresentation(smallImage);
    UIGraphicsEndImageContext();
    // store or return smallImage / data as needed
}
Do not forget to add the CoreGraphics framework.
Use this category (declare the methods in the .h file accordingly; a sketch of the header follows the code):
#import "UIImage+Resize.h"
#implementation UIImage (Resize)
- (UIImage *)resizedImage:(CGSize)bounds {
return [self resizedImage:bounds upScale:YES];
}
- (UIImage*)resizedImage:(CGSize)bounds upScale:(BOOL)upScale {
CGSize originalSize = self.size;
float xScale = bounds.width / originalSize.width;
float yScale = bounds.height / originalSize.height;
float scale = MIN(xScale, yScale);
if (!upScale) {
scale = MIN(scale, 1);
}
CGSize newSize = CGSizeMake(originalSize.width * scale, originalSize.height * scale);
UIGraphicsBeginImageContextWithOptions(newSize, NO, [UIScreen mainScreen].scale);
[self drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
UIImage *resultImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return resultImage;
}
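For completeness, a sketch of the matching header (the file name is taken from the #import above):
// UIImage+Resize.h
@interface UIImage (Resize)
- (UIImage *)resizedImage:(CGSize)bounds;
- (UIImage *)resizedImage:(CGSize)bounds upScale:(BOOL)upScale;
@end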
I've found the following code on this page:
- (UIImage*)imageWithImage:(UIImage*)sourceImage scaledToWidth:(float)i_width {
    if (sourceImage.size.width > sourceImage.size.height) {
        sourceImage = [[UIImage alloc] initWithCGImage:sourceImage.CGImage
                                                 scale:1.0
                                           orientation:UIImageOrientationRight];
    }
    float oldWidth = sourceImage.size.width;
    float scaleFactor = i_width / oldWidth;
    float newHeight = sourceImage.size.height * scaleFactor;
    float newWidth = oldWidth * scaleFactor;
    UIGraphicsBeginImageContext(CGSizeMake(newWidth, newHeight));
    [sourceImage drawInRect:CGRectMake(0, 0, newWidth, newHeight)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
All you have to do is provide it with the i_width, and it will scale it accordingly.
I've added a little twist to it, so that if the picture is in landscape mode, it will be rotated to portrait, and then resized. If you want it to be the opposite (portrait to landscape), change this:
if (sourceImage.size.width>sourceImage.size.height) {
sourceImage = [[UIImage alloc] initWithCGImage: sourceImage.CGImage
scale: 1.0
orientation: UIImageOrientationRight];
}
to this:
if (sourceImage.size.height>sourceImage.size.width) {
sourceImage = [[UIImage alloc] initWithCGImage: sourceImage.CGImage
scale: 1.0
orientation: UIImageOrientationRight];
}
CAVEAT: my rotating method doesn't take into consideration whether the picture is pointing left or right. In other words, if an image is landscape but upside down, my code can't recognise that. I hope someone else can shed some light on this, though :)
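One possible direction (just a sketch, not a tested fix) is to look at the image's imageOrientation property instead of comparing width and height, since that metadata distinguishes sideways and upside-down captures:
// UIImageOrientation describes how the underlying pixel data should be oriented
// for display, so it can tell an upside-down image apart from an upright one.
BOOL isUpsideDown = (sourceImage.imageOrientation == UIImageOrientationDown ||
                     sourceImage.imageOrientation == UIImageOrientationDownMirrored);
BOOL isSideways   = (sourceImage.imageOrientation == UIImageOrientationLeft  ||
                     sourceImage.imageOrientation == UIImageOrientationRight ||
                     sourceImage.imageOrientation == UIImageOrientationLeftMirrored ||
                     sourceImage.imageOrientation == UIImageOrientationRightMirrored);
// choose the corrective orientation for -initWithCGImage:scale:orientation:
// based on these flags rather than on the width/height comparison above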

iOS - Merging two images of different size

I'm facing the following problem: I have to merge two images A and B to create a new image C as the result of the merging.
I already know how to merge two images, but in this case my goal is a little bit different.
I would like image A to be the background for image B.
For instance, if image A is 500x500 and image B is 460x460, I would like image C (the result of the merging) to be 500x500, with image B (460x460) centered in it.
Thanks in advance for any help or suggestions.
This is what I've done in my app, but without using UIImageView:
UIImage *bottomImage = [UIImage imageNamed:#"bottom.png"]; //background image
UIImage *image = [UIImage imageNamed:#"top.png"]; //foreground image
CGSize newSize = CGSizeMake(width, height);
UIGraphicsBeginImageContext( newSize );
// Use existing opacity as is
[bottomImage drawInRect:CGRectMake(0,0,newSize.width,newSize.height)];
// Apply supplied opacity if applicable
[image drawInRect:CGRectMake(0,0,newSize.width,newSize.height) blendMode:kCGBlendModeNormal alpha:0.8];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
If the image already has the opacity you want, you do not need to set it (as with bottomImage); otherwise you can supply one (as with image).
After this UIImage is created, you can embed it in your UIImageView.
UPDATE: Thanks to Ahmet AkkoK: for Swift (2.2) users the blend mode constant has changed. kCGBlendModeNormal is replaced with CGBlendMode.Normal.
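To get the centering asked about in the question (e.g. a 460x460 image B centered on a 500x500 image A), the foreground draw rect can simply be offset by half the size difference; a minimal sketch along the same lines as the answer above:
CGSize newSize = bottomImage.size; // e.g. 500x500
UIGraphicsBeginImageContext(newSize);
[bottomImage drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
// center the foreground image by offsetting it by half of the size difference
CGFloat xOffset = (newSize.width  - image.size.width)  / 2.0;
CGFloat yOffset = (newSize.height - image.size.height) / 2.0;
[image drawInRect:CGRectMake(xOffset, yOffset, image.size.width, image.size.height)];
UIImage *centeredImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();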
I had multiple foreground images to add over the same background.
This is my code:
UIImage *bottomImage = [UIImage imageNamed:#"photo 2.JPG"]; //background image
UIImage *image = [UIImage imageNamed:#"photo 3.JPG"]; //foreground image
UIImage *image1 = [UIImage imageNamed:#"photo 4.JPG"]; //foreground image
UIImage *image2 = [UIImage imageNamed:#"photo 5.JPG"]; //foreground image
CGSize newSize = CGSizeMake(320, 480);
UIGraphicsBeginImageContext( newSize );
// Use existing opacity as is
[bottomImage drawInRect:CGRectMake(0,0,newSize.width,newSize.height)];
// Apply supplied opacity if applicable
[image drawInRect:CGRectMake(0,0,newSize.width,newSize.height) blendMode:kCGBlendModeNormal alpha:0.4];
[image1 drawInRect:CGRectMake(0,0,newSize.width,newSize.height) blendMode:kCGBlendModeNormal alpha:0.3];
[image2 drawInRect:CGRectMake(0,0,newSize.width,newSize.height) blendMode:kCGBlendModeNormal alpha:0.2];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
resultView = [[UIImageView alloc] initWithImage:newImage];
resultView.frame = CGRectMake(0, 0,320,460);
[self.view addSubview:resultView];
Swift version
Copy/Paste to Playground
var bottomImage:UIImage = UIImage(named:"avatar_4.png")! //background image
var imageTop:UIImage = UIImage(named:"group_4.png")! //top image
var newSize = CGSizeMake(bottomImage.size.width, bottomImage.size.height)
UIGraphicsBeginImageContext( newSize )
bottomImage.drawInRect(CGRectMake(0,0,newSize.width,newSize.height))
// decrease top image to 36x36
imageTop.drawInRect(CGRectMake(18,18,36,36), blendMode:kCGBlendModeNormal, alpha:1.0)
var newImage:UIImage = UIGraphicsGetImageFromCurrentImageContext()
var imageData = UIImagePNGRepresentation(newImage)
To load images from the playground:
Open the playground file and create a Resources folder inside it
Copy the images into this folder
Here is a quick copy/paste function for those of you who want to use Srikar Appal's answer (in case the background and foreground images are of different sizes).
- (UIImage *) mergeImages:(NSString *)bgImageFileName foreGround:(NSString *)fgImageFileName {
UIImage *bottomImage = [UIImage imageNamed:bgImageFileName]; //background image
UIImage *image = [UIImage imageNamed:fgImageFileName]; //foreground image
CGSize newSize = CGSizeMake(bottomImage.size.width, bottomImage.size.height);
UIGraphicsBeginImageContext(newSize);
// Use existing opacity as is
[bottomImage drawInRect:CGRectMake(0,0,newSize.width,newSize.height)];
// Apply supplied opacity if applicable
// Change xPos, yPos if applicable
[image drawInRect:CGRectMake(11,11,image.size.width,image.size.height) blendMode:kCGBlendModeNormal alpha:1.0];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
}
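Usage would then be something like this (the file names and the target image view are placeholders):
UIImage *merged = [self mergeImages:@"background.png" foreGround:@"badge.png"]; // hypothetical file names
self.resultImageView.image = merged; // resultImageView is assumed to exist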
in Swift:
let bottomImage = UIImage(named: "Bottom_Image.png")
let frontImage = UIImage (named: "Front_Image.png")
let size = CGSize(width: 67, height: 55)
UIGraphicsBeginImageContext(size)
let areaSize = CGRect(x: 0, y: 0, width: size.width, height: size.height)
let frontImageSize = CGRect(x: 14, y: 3, width: 40, height: 40)
bottomImage!.drawInRect(areaSize, blendMode: CGBlendMode.Normal, alpha: 1.0)
frontImage!.drawInRect(frontImageSize, blendMode: CGBlendMode.Normal, alpha: 1.0)
let newImage:UIImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
Thanks @Srikar Appal for the iOS solution.
For anyone who is looking to merge images on OS X:
- (NSImage *) mergeImages:(NSString *)bgImageFileName foreGround:(NSString *)fgImageFileName {
NSImage *bottomImage = [NSImage imageNamed:bgImageFileName];
NSImage *overlayedImage = [NSImage imageNamed:fgImageFileName];
NSSize newSize = NSMakeSize(bottomImage.size.width, bottomImage.size.height);
NSSize overlaySize = NSMakeSize(newSize.width/2, newSize.height/2); //change the size according to your requirements
NSImage *newImage = [[NSImage alloc] initWithSize:newSize];
[newImage lockFocus];
[bottomImage drawInRect:NSMakeRect(0, 0, newSize.width, newSize.height)];
[overlayedImage drawInRect:NSMakeRect(newSize.width-overlaySize.width, 0, overlaySize.width, overlaySize.height)]; //set the position as required
[newImage unlockFocus];
return newImage;
}
You can go with another trick, as described below:
Add the first image to an image view.
Add the second image to another image view.
Add both of the above image views to a single main image view and access the combined image through the image view's image property: mainImageView.image.
Have a look at the code below (a more minimal sketch follows after it):
CGRect rect= investmentDetailTblView.frame;
int rows = investmentDetailArray.count;
CGFloat heightFinal = 5;
CGRect frame1;
for (int i=0; i<rows; i++)
{
    frame1 = [investmentDetailTblView rectForRowAtIndexPath:[NSIndexPath indexPathForRow:i inSection:0]];
    CGFloat height = frame1.size.height;
    heightFinal = heightFinal + height;
}
rect.size.height = heightFinal;
investmentDetailTblView.frame=rect;
UIImageView *imageViewTable = [[UIImageView alloc] init];
[imageViewTable setFrame:CGRectMake(0, 0, frame1.size.width, heightFinal)];
[investmentDetailTblView reloadData];
if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)])
    UIGraphicsBeginImageContextWithOptions(investmentDetailTblView.bounds.size, NO, [UIScreen mainScreen].scale);
else
    UIGraphicsBeginImageContext(investmentDetailTblView.bounds.size);
[investmentDetailTblView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
imageViewTable.image = image; //Adding the table image to the image view.
CGRect frame=CGRectMake(0, heightFinal+5, investmentDetailTblView.frame.size.width, 20) ;
UIView *footerView=[DataStore kkrLogoView];
footerView.frame=frame;
if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)])
    UIGraphicsBeginImageContextWithOptions(footerView.frame.size, NO, [UIScreen mainScreen].scale);
else
    UIGraphicsBeginImageContext(footerView.frame.size);
[footerView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *kkrLogoImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageView *imageViewFooter = [[UIImageView alloc] init];
[imageViewFooter setFrame:CGRectMake(0, heightFinal, footerView.frame.size.width, footerView.frame.size.height)];
imageViewFooter.image = kkrLogoImage; //Adding the footer image to the image view.
UIImageView *mainImageView = [[UIImageView alloc] init];
[mainImageView setFrame:CGRectMake(0, 0, frame1.size.width, (heightFinal+footerView.frame.size.height))];
[mainImageView addSubview:imageViewTable];
[mainImageView addSubview:imageViewFooter];
if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)])
    UIGraphicsBeginImageContextWithOptions(mainImageView.frame.size, NO, [UIScreen mainScreen].scale);
else
    UIGraphicsBeginImageContext(mainImageView.frame.size);
[mainImageView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *finalImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
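Stripped of the table view specifics, the same idea reduces to roughly this sketch (the asset names, frames and sizes are placeholders):
UIImage *imageA = [UIImage imageNamed:@"background"]; // hypothetical asset names
UIImage *imageB = [UIImage imageNamed:@"overlay"];
UIImageView *mainImageView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 500, 500)];
UIImageView *backgroundView = [[UIImageView alloc] initWithImage:imageA];
backgroundView.frame = mainImageView.bounds;
UIImageView *foregroundView = [[UIImageView alloc] initWithImage:imageB];
foregroundView.frame = CGRectMake(20, 20, 460, 460); // a 460x460 image centered in 500x500
[mainImageView addSubview:backgroundView];
[mainImageView addSubview:foregroundView];
// render the composed view hierarchy into a single UIImage
UIGraphicsBeginImageContextWithOptions(mainImageView.bounds.size, NO, 0);
[mainImageView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *combined = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();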

Render large CATiledLayer into smaller area

I have a CATiledLayer of size 4096 x 4096 which I want to render into a PNG of size 1024 x 1024.
This isn't doing it...
-(NSData *)createPNGFormat
{
    UIGraphicsBeginImageContext(CGSizeMake(1024, 1024));
    tiledLayer.transform = CATransform3DMakeScale(0.25, 0.25, 1.0);
    [tiledLayer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return UIImagePNGRepresentation(image);
}
Any ideas on how to do this?
[First hack] Brute force: render tile by tile. This works (slowly):
-(UIImage *)renderTileX:(NSInteger)xpos tileY:(NSInteger)ypos scale:(CGFloat)ascale
{
    CGSize tiledsize = tiledLayer.tileSize;
    CGRect tiledframe = tiledLayer.bounds;
    CALayer *container = [CALayer layer];
    container.frame = CGRectMake(0, 0, tiledsize.width, tiledsize.height);
    UIGraphicsBeginImageContext(tiledsize);
    [container addSublayer:tiledLayer];
    tiledLayer.frame = CGRectMake(-tiledsize.width*xpos, -tiledsize.height*ypos, tiledframe.size.width, tiledframe.size.height);
    [container renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    [tiledLayer removeFromSuperlayer];
    CGFloat fsize = tiledsize.width*ascale;
    CGRect apicrect = CGRectMake(0, 0, fsize, fsize);
    UIGraphicsBeginImageContext(apicrect.size);
    [image drawInRect:apicrect];
    UIImage *thumb = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return thumb;
}
-(UIImage *)renderTiledLayer
{
    UIGraphicsBeginImageContext(CGSizeMake(1024, 1024));
    for (NSInteger x = 0; x < 4; x++) {
        for (NSInteger y = 0; y < 4; y++) {
            NSLog(@"render %ld:%ld", (long)x, (long)y);
            UIImage *tile = [self renderTileX:x tileY:y scale:0.25];
            [tile drawAtPoint:CGPointMake(x*256, y*256)];
        }
    }
    UIImage *fimage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return fimage;
}
[Using the Google...] The better way
-(UIImage *)createThumb
{
    UIGraphicsBeginImageContext(CGSizeMake(1024, 1024));
    CGContextScaleCTM(UIGraphicsGetCurrentContext(), 0.25, 0.25);
    [tiledLayer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *fimage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return fimage;
}
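Since the original goal was a PNG, the result can then be encoded the same way as in the question (the output path is just a placeholder):
NSData *pngData = UIImagePNGRepresentation([self createThumb]);
NSString *path = [NSTemporaryDirectory() stringByAppendingPathComponent:@"thumb.png"]; // placeholder path
[pngData writeToFile:path atomically:YES];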