I need to split a big image (about 10000px in height) into a number of smaller images to use as OpenGL textures. Below is how I'm doing it right now. Does anybody have ideas for doing it faster? It's taking quite long.
NSMutableArray *images = [[NSMutableArray alloc] initWithCapacity:numberOfImages];
for (int i = 0; i < numberOfImages; i++) {
    int t = i * origHeight;
    CGRect fromRect = CGRectMake(0, t, origWidth, origHeight); // or whatever rectangle
    CGImageRef drawImage = CGImageCreateWithImageInRect(sourceImage.CGImage, fromRect);
    UIImage *newImage = [UIImage imageWithData:UIImageJPEGRepresentation([UIImage imageWithCGImage:drawImage], 1.0)];
    [images addObject:newImage];
    CGImageRelease(drawImage);
}
You can pre-split the image beforehand, e.g. with the convert command from ImageMagick, which you can install with brew:
http://www.imagemagick.org/discourse-server/viewtopic.php?f=1&t=15771
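For example, a single convert call can slice an image into equal tiles (the file names and tile size here are placeholders; adjust them to your strip dimensions):

convert source.png -crop 1024x1024 +repage tiles_%d.png

Each numbered output file is one tile, ready to be loaded as a texture.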
I have a bug when masking a rotated image. The bug wasn't present on iOS 8 (even when built with the iOS 9 SDK), and it still isn't present on non-Retina iOS 9 iPads (iPad 2). I don't have any more Retina iPads still on iOS 8, and in the meantime I've also updated the build to support both 32-bit and 64-bit architectures. The point is, I can't be sure whether the bug came with the update to iOS 9 or with the change to building for both architectures.
The nature of the bug is this: I'm iterating through a series of images, rotating each by an angle determined by the number of segments I want to get out of the picture, and masking the picture on each rotation to get a different part of it. Everything works fine EXCEPT when the source image is rotated by M_PI, 2*M_PI, 3*M_PI or 4*M_PI (or multiples thereof, i.e. 5*M_PI, 6*M_PI, etc.). When it's rotated by M_PI, 2*M_PI or 3*M_PI, the resulting images are incorrectly masked: the parts that should be transparent end up black. When it's rotated by 4*M_PI, the masking produces a nil image, crashing the application in the last step (where I add the resulting image to an array).
I'm using CIImage for rotation and CGImage masking for performance reasons (this combination showed the best performance), so I would prefer to keep this choice of methods if at all possible.
UPDATE: The code shown below is run on a background thread.
Here is the code snippet:
CIContext *context = [CIContext contextWithOptions:[NSDictionary dictionaryWithObject:[NSNumber numberWithBool:NO] forKey:kCIContextUseSoftwareRenderer]];
for (int i = 0; i < [src count]; i++) {
    // for every image in the source array
    CIImage *newImage;
    if (IS_RETINA) {
        newImage = [CIImage imageWithCGImage:[[src objectAtIndex:i] CGImage]];
    } else {
        CIImage *tImage = [CIImage imageWithCGImage:[[src objectAtIndex:i] CGImage]];
        newImage = [tImage imageByApplyingTransform:CGAffineTransformMakeScale(0.5, 0.5)];
    }
    float angle = angleOff * M_PI / 180.0;
    // angleOff is just the initial angle offset, if I want to start cutting from a different start point
    for (int j = 0; j < nSegments; j++) {
        // for the given number of circle slices (segments)
        CIImage *ciResult = [newImage imageByApplyingTransform:CGAffineTransformMakeRotation(angle + ((2*M_PI / (2 * nSegments)) + (2*M_PI / nSegments) * j))];
        // the way the angle is calculated is specific to how the image is used later. In any case,
        // the bug happens when the resulting angle is M_PI, 2*M_PI, 3*M_PI or 4*M_PI
        CGPoint center = CGPointMake([ciResult extent].origin.x + [ciResult extent].size.width/2, [ciResult extent].origin.y + [ciResult extent].size.width/2);
        for (int k = 0; k < [src count]; k++) {
            // this iteration is also specific; it has to do with how much of the image is being
            // masked in each iteration, but the bug happens regardless of this iteration
            CGSize dim = [[masks objectAtIndex:k] size];
            if (IS_RETINA && (floor(NSFoundationVersionNumber) > NSFoundationVersionNumber_iOS_7_1)) {
                dim = CGSizeMake(dim.width*2, dim.height*2);
            } // this correction was needed after iOS 7 was introduced, not sure why :)
            CGRect newSize = CGRectMake(center.x - dim.width/2,
                                        center.y + ((circRadius + orbitWidth*(k+1))*scale - dim.height),
                                        dim.width, dim.height);
            // the calculation of the new size is specific to the application; I don't find it relevant.
            CGImageRef imageRef = [context createCGImage:ciResult fromRect:newSize];
            CGImageRef maskRef = [[masks objectAtIndex:k] CGImage];
            CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                                CGImageGetHeight(maskRef),
                                                CGImageGetBitsPerComponent(maskRef),
                                                CGImageGetBitsPerPixel(maskRef),
                                                CGImageGetBytesPerRow(maskRef),
                                                CGImageGetDataProvider(maskRef),
                                                NULL,
                                                YES);
            CGImageRef masked = CGImageCreateWithMask(imageRef, mask);
            UIImage *result = [UIImage imageWithCGImage:masked];
            CGImageRelease(imageRef);
            CGImageRelease(mask);
            CGImageRelease(masked);
            [temps addObject:result];
        }
    }
}
I would be eternally grateful for any tips anyone might have. This bug has me puzzled beyond words :)
My app assigns and displays an image inside a UIImageView. This happens with 24 image views, all in viewDidLoad. The images are assigned randomly from a list of fifty images. The view controller is presented modally from the main screen. The first time, it takes a while to load. If I'm lucky, the view loads a second time. The third time, it almost always crashes. I've tried resizing the images to around 200 pixels. I've tried assigning the images with:
image1 = [UIImage imageNamed:@"image1.png"];
[self.imageView setImage: image1];
and also with:
NSString *imagePath = [[NSBundle mainBundle] pathForResource:@"image1" ofType:@"png"];
image1 = [[UIImage alloc] initWithContentsOfFile:imagePath];
This second one seemed to only make things worse.
I also tried running the app with Instruments, which didn't recognize any memory leaks.
I really don't know where else to turn. This app represents an enormous investment of time and I would really like to see this problem resolved...
Thank you so much
The most efficient way to load a smaller version of an image from disk is this: instead of using imageNamed:, use the Image I/O framework to request a thumbnail that is the actual size you'll be displaying, by calling CGImageSourceCreateThumbnailAtIndex. Here's the example from my book:
NSURL* url = [[NSBundle mainBundle] URLForResource:@"colson"
                                     withExtension:@"jpg"];
CGImageSourceRef src = CGImageSourceCreateWithURL((__bridge CFURLRef)url, nil);
CGFloat scale = [UIScreen mainScreen].scale;
CGFloat w = self.iv.bounds.size.width * scale;
NSDictionary* d = @{(id)kCGImageSourceShouldAllowFloat: (id)kCFBooleanTrue,
                    (id)kCGImageSourceCreateThumbnailWithTransform: (id)kCFBooleanTrue,
                    (id)kCGImageSourceCreateThumbnailFromImageAlways: (id)kCFBooleanTrue,
                    (id)kCGImageSourceThumbnailMaxPixelSize: @((int)w)};
CGImageRef imref = CGImageSourceCreateThumbnailAtIndex(src, 0, (__bridge CFDictionaryRef)d);
UIImage* im = [UIImage imageWithCGImage:imref scale:scale
                            orientation:UIImageOrientationUp];
self.iv.image = im;
CFRelease(imref); CFRelease(src);
It is a huge waste of memory to ask a UIImageView to display an image larger than the UIImageView itself, as the bitmap for the full-size image must be maintained in memory. The Image I/O framework generates the smaller version without ever unpacking the entire original image into memory as a bitmap.
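If you load thumbnails in several places, the same calls are easy to wrap in a helper. Here is a minimal sketch (the method name and the nil-handling are my own, not from the book):

+ (UIImage *)thumbnailForURL:(NSURL *)url maxPixelSize:(CGFloat)px scale:(CGFloat)scale {
    // Open the image source without decoding the full bitmap.
    CGImageSourceRef src = CGImageSourceCreateWithURL((__bridge CFURLRef)url, nil);
    if (!src) return nil;
    NSDictionary *d = @{(id)kCGImageSourceShouldAllowFloat: (id)kCFBooleanTrue,
                        (id)kCGImageSourceCreateThumbnailWithTransform: (id)kCFBooleanTrue,
                        (id)kCGImageSourceCreateThumbnailFromImageAlways: (id)kCFBooleanTrue,
                        (id)kCGImageSourceThumbnailMaxPixelSize: @((int)px)};
    // Decode straight to a thumbnail no larger than px pixels on its long side.
    CGImageRef imref = CGImageSourceCreateThumbnailAtIndex(src, 0, (__bridge CFDictionaryRef)d);
    CFRelease(src);
    if (!imref) return nil;
    UIImage *im = [UIImage imageWithCGImage:imref scale:scale orientation:UIImageOrientationUp];
    CGImageRelease(imref);
    return im;
}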
I had this problem once before with images from a regular website that were way larger than the view I was using. If I remember correctly, the images are uncompressed to their full resolution and then fit into the image view, hogging your memory. I had to scale them down to the image view size before showing them. Add CoreGraphics.framework and use this class to make an image object to use with your image view. I found it online and tweaked it a little while looking for the same answer, but I don't remember where, so thanks to whoever posted the original.
ImageScale.h
#import <Foundation/Foundation.h>
#import <UIKit/UIKit.h>

@interface ImageScale : NSObject
+ (UIImage*)imageWithImage:(UIImage*)sourceImage scaledToSize:(CGSize)newSize;
@end
ImageScale.m
#import "ImageScale.h"
#implementation ImageScale
+ (UIImage*)imageWithImage:(UIImage*)sourceImage scaledToSize:(CGSize)newSize
{
CGFloat targetWidth = newSize.width;
CGFloat targetHeight = newSize.height;
CGImageRef imageRef = [sourceImage CGImage];
CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(imageRef);
CGColorSpaceRef colorSpaceInfo = CGImageGetColorSpace(imageRef);
if (bitmapInfo == kCGImageAlphaNone) {
bitmapInfo = kCGImageAlphaNoneSkipLast;
}
CGContextRef bitmap;
if (sourceImage.imageOrientation == UIImageOrientationUp || sourceImage.imageOrientation == UIImageOrientationDown) {
bitmap = CGBitmapContextCreate(NULL, targetWidth, targetHeight, CGImageGetBitsPerComponent(imageRef), CGImageGetBytesPerRow(imageRef), colorSpaceInfo, bitmapInfo);
} else {
bitmap = CGBitmapContextCreate(NULL, targetHeight, targetWidth, CGImageGetBitsPerComponent(imageRef), CGImageGetBytesPerRow(imageRef), colorSpaceInfo, bitmapInfo);
}
if (sourceImage.imageOrientation == UIImageOrientationLeft) {
CGContextRotateCTM (bitmap, M_PI_2); // + 90 degrees
CGContextTranslateCTM (bitmap, 0, -targetHeight);
} else if (sourceImage.imageOrientation == UIImageOrientationRight) {
CGContextRotateCTM (bitmap, -M_PI_2); // - 90 degrees
CGContextTranslateCTM (bitmap, -targetWidth, 0);
} else if (sourceImage.imageOrientation == UIImageOrientationUp) {
// NOTHING
} else if (sourceImage.imageOrientation == UIImageOrientationDown) {
CGContextTranslateCTM (bitmap, targetWidth, targetHeight);
CGContextRotateCTM (bitmap, -M_PI); // - 180 degrees
}
CGContextDrawImage(bitmap, CGRectMake(0, 0, targetWidth, targetHeight), imageRef);
CGImageRef ref = CGBitmapContextCreateImage(bitmap);
UIImage* newImage = [UIImage imageWithCGImage:ref];
CGContextRelease(bitmap);
CGImageRelease(ref);
return newImage;
}
#end
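Usage might look like this (a sketch, assuming the imageView and image name from the question):

self.imageView.image = [ImageScale imageWithImage:[UIImage imageNamed:@"image1.png"]
                                     scaledToSize:self.imageView.bounds.size];

On a Retina screen you would want to multiply the target size by [UIScreen mainScreen].scale so the scaled bitmap stays sharp.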
I am trying to add an individual UIImageView for each item in an array. This is what I have so far:
_shapeArray = [NSArray arrayWithObjects:@"cloud.png", @"star.png", nil];
for (int i = 0; i < [_shapeArray count]; i++) {
    // I make a UIImage, which I will draw later
    UIImage* image = [UIImage imageNamed:[NSString stringWithFormat:@"%@", [_shapeArray objectAtIndex:i]]];
    UIImageView* blockView = [[UIImageView alloc] initWithImage:image];
    blockView.frame = CGRectMake(arc4random() % 320, arc4random() % 460, image.size.width, image.size.height);
    [self.view addSubview:blockView];
}
But as you can tell, it just adds the last image in the array. I can't figure out a way to add the array index to the name of the UIImageView. Maybe I'm going about it the wrong way; if so, what would be the best way?
This code works, but you need to make sure of a couple of things:
- That the file name actually exists in your bundle (check for uppercase/lowercase); you would not get an error message if it didn't, it just would not show the picture
- That the image sizes are not too large and don't cover each other
You are adding the images in the same frame. Offset each one, for example:
blockView.frame = CGRectMake(arc4random() % 320 + SomeXValue, arc4random() % 460 + SomeYValue, image.size.width, image.size.height);
I have a problem when trying to display small (16x16) UIImages: they appear a bit blurry.
These UIImages are favicons I download from different websites, and compared with the same icons in other apps, mine are blurrier.
I'm displaying them in a custom UITableViewCell like below:
NSData *favicon = [NSData dataWithContentsOfURL:[NSURL URLWithString:[subscription faviconURL]]];
if ([favicon length] > 0) {
    UIImage *img = [[UIImage alloc] initWithData:favicon];
    CGSize size;
    size.height = 16;
    size.width = 16;
    UIGraphicsBeginImageContext(size);
    [img drawInRect:CGRectMake(0, 0, size.width, size.height)];
    UIImage *scaledImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    cell.mainPicture.image = scaledImage;
}
Is there anything wrong with my custom UITableViewCell or with the way I download the image?
Thank you.
[EDIT 1]: By the way, .ico and .png look the same.
[EDIT 2]: I'm working on an iPad 2, so no Retina display.
When you display the resultant UIImage to the user, are you aligning the view on pixel boundaries?
theImage.frame = CGRectIntegral(theImage.frame);
Most of your graphics and text will appear to be blurry if your views are positioned or sized with non-integral values. If you run your app in the simulator, you can turn on "Color Misaligned Images" to highlight elements with bad offsets.
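As a sketch (assuming mainPicture is the image view inside the asker's custom cell), you could snap the image view to whole-pixel coordinates after layout:

// In the custom UITableViewCell subclass.
- (void)layoutSubviews {
    [super layoutSubviews];
    // Round the frame to integral values so the icon is not drawn at a
    // fractional offset, which causes the blurriness described above.
    self.mainPicture.frame = CGRectIntegral(self.mainPicture.frame);
}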
In my app I am sending a certain number of images from my device to another device. I have achieved all of this. Now what I want is: when I send only 1 image to the other device, the frame of the image view should be the full screen; if I send 2 images, the 2 images should cover the whole screen between them. So the frame should change dynamically according to the number of images sent. Currently I am using a table view to display the received images. What other option would be best to achieve my target? If anyone has done this type of work before, I would appreciate your help.
Thanks,
Daisy
You can do something like this:
CGFloat screenWidth = 320.0;
CGFloat screenHeight = 460.0;
NSArray *imageNames = [NSArray arrayWithObjects:@"Picture1.png", @"Picture2.png", nil];
NSInteger numberOfImages = [imageNames count];
for (NSInteger j = 0; j < numberOfImages; ++j)
{
    UIImageView *image = [[UIImageView alloc] initWithImage:[UIImage imageNamed:[imageNames objectAtIndex:j]]];
    [image setFrame:CGRectMake(0.0, screenHeight / (CGFloat)numberOfImages * (CGFloat)j, screenWidth, screenHeight / (CGFloat)numberOfImages)];
    [self addSubview:image];
    [image release];
}
This example lays the images out vertically. If you also want them arranged horizontally, you'll need a bit more math; a rough grid version is sketched below.
The code is not tested.
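For instance (a sketch under the same assumptions as above: fixed screen dimensions, pre-ARC memory management, placeholder image names), a grid layout could compute rows and columns like this:

NSArray *imageNames = [NSArray arrayWithObjects:@"Picture1.png", @"Picture2.png", @"Picture3.png", @"Picture4.png", nil];
NSInteger numberOfImages = [imageNames count];
// Pick a column count near the square root so the grid stays roughly square.
NSInteger columns = (NSInteger)ceil(sqrt((double)numberOfImages));
NSInteger rows = (NSInteger)ceil((double)numberOfImages / (double)columns);
CGFloat cellWidth = screenWidth / (CGFloat)columns;
CGFloat cellHeight = screenHeight / (CGFloat)rows;
for (NSInteger j = 0; j < numberOfImages; ++j)
{
    NSInteger row = j / columns;
    NSInteger col = j % columns;
    UIImageView *image = [[UIImageView alloc] initWithImage:[UIImage imageNamed:[imageNames objectAtIndex:j]]];
    [image setFrame:CGRectMake(cellWidth * (CGFloat)col, cellHeight * (CGFloat)row, cellWidth, cellHeight)];
    [self addSubview:image];
    [image release];
}

This is equally untested, like the example above.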