I have an application that pulls images from an NSURL. Is it possible to inform the application that they are Retina ('@2x') versions (the images are of Retina resolution)? I currently have the following, but the images appear pixelated on higher-resolution displays:
NSURL *url = [NSURL URLWithString:self.imageURL];
NSData *data = [NSData dataWithContentsOfURL:url];
UIImage *image = [UIImage imageWithData:data];
self.pictureImageView.image = image;
You need to rescale the UIImage before adding it to the image view.
NSURL *url = [NSURL URLWithString:self.imageURL];
NSData *data = [NSData dataWithContentsOfURL:url];
UIImage *image = [UIImage imageWithData:data];
CGFloat screenScale = [UIScreen mainScreen].scale;
if (image.scale != screenScale)
image = [UIImage imageWithCGImage:image.CGImage scale:screenScale orientation:image.imageOrientation];
self.pictureImageView.image = image;
It's best to avoid hard-coding the scale value, thus the UIScreen call. See Apple’s documentation on UIImage’s scale property for more information about why this is necessary.
It’s also best to avoid using NSData’s -dataWithContentsOfURL: method (unless your code is running on a background thread), as it uses a synchronous network call which cannot be monitored or cancelled. You can read more about the pains of synchronous networking and the ways to avoid it in this Apple Technical Q&A.
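For example, a minimal asynchronous sketch (assuming iOS 5's sendAsynchronousRequest: on NSURLConnection and iOS 6's imageWithData:scale:; self.imageURL and pictureImageView come from the question's code):
NSURL *url = [NSURL URLWithString:self.imageURL];
NSURLRequest *request = [NSURLRequest requestWithURL:url];
[NSURLConnection sendAsynchronousRequest:request
                                   queue:[NSOperationQueue mainQueue]
                       completionHandler:^(NSURLResponse *response, NSData *data, NSError *error) {
    if (!data) return;
    // Build the image at the screen's scale so it is not treated as 1x.
    UIImage *image = [UIImage imageWithData:data scale:[UIScreen mainScreen].scale];
    self.pictureImageView.image = image;
}];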
Try using imageWithData:scale: (iOS 6 and later)
NSData *imageData = [NSData dataWithContentsOfURL:url];
UIImage *image = [UIImage imageWithData:imageData scale:[[UIScreen mainScreen] scale]];
You need to set the scale on the UIImage.
UIImage* img = [[UIImage alloc] initWithData:data];
CGFloat screenScale = [UIScreen mainScreen].scale;
if (screenScale != img.scale) {
img = [UIImage imageWithCGImage:img.CGImage scale:screenScale orientation:img.imageOrientation];
}
The documentation says to be careful to construct all your UIImages at the same scale; otherwise you might get weird display issues where things show at half size, double size, half resolution, and so on. To avoid all that, load all UIImages at Retina resolution: resources from the bundle are loaded at the correct scale automatically, but for UIImages constructed from URL data you need to set the scale yourself.
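For example, a small helper capturing that rule (a sketch; the helper name is mine, not from the documentation):
// Build a UIImage from downloaded data, normalized to the screen's scale.
static UIImage *ImageAtScreenScale(NSData *data) {
    UIImage *img = [[UIImage alloc] initWithData:data];
    CGFloat screenScale = [UIScreen mainScreen].scale;
    if (img && img.scale != screenScale) {
        img = [UIImage imageWithCGImage:img.CGImage
                                  scale:screenScale
                            orientation:img.imageOrientation];
    }
    return img;
}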
To add to this: in the same situation, what I did specifically was the following, and it works like a charm.
double scaleFactor = [UIScreen mainScreen].scale;
NSLog(#"Scale Factor is %f", scaleFactor);
if (scaleFactor == 1.0) {
    [cell.videoImageView setImageWithURL:[NSURL URLWithString:regularThumbnailURLString]];
} else if (scaleFactor == 2.0) {
    [cell.videoImageView setImageWithURL:[NSURL URLWithString:retinaThumbnailURLString]];
}
The @2x convention is just a convenient way of loading images from the application bundle.
If you want to show an image on a Retina display, you have to make it twice as big as the view that shows it:
Image size: 100x100
View size: 50x50
Edit: I think if you're loading images from a server, the best solution would be to add an additional parameter (e.g. scale) and return an image of the appropriate size:
www.myserver.com/get_image.php?image_name=img.png&scale=2
You can obtain the scale using [[UIScreen mainScreen] scale], for example:
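A sketch of building such a request URL (the endpoint is the hypothetical one above, not a real service):
CGFloat scale = [[UIScreen mainScreen] scale];
// Append the device scale so the server can return an appropriately sized image.
NSString *urlString = [NSString stringWithFormat:
    @"http://www.myserver.com/get_image.php?image_name=img.png&scale=%d", (int)scale];
NSURL *url = [NSURL URLWithString:urlString];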
To tell the iPhone programmatically that a particular image is Retina, you can do something like this:
UIImage *img = [self getImageFromDocumentDirectory];
img = [UIImage imageWithCGImage:img.CGImage scale:2 orientation:img.imageOrientation];
In my case, the TabBarItem image was dynamic, i.e. it was downloaded from a server, so iOS could not identify it as Retina. The code snippet above worked for me like a charm.
I'm using a bunch of CIFilter filters in my app to adjust brightness, saturation, etc., and they work fine. However, I'm having some issues with inputSharpness: if I touch the sharpness slider, the picture just disappears. Relevant code:
UIImage *aUIImage = [imageView image];
CGImageRef aCGImage = aUIImage.CGImage;
aCIImage = [CIImage imageWithCGImage:aCGImage];
//Create context
context = [CIContext contextWithOptions:nil];
sharpFilter = [CIFilter filterWithName:@"CIAttributeTypeScalar" keysAndValues: @"inputImage", aCIImage, nil];
....
- (IBAction)sharpSliderChanged:(id)sender
{
//Set filter value
[sharpFilter setValue:[NSNumber numberWithFloat:sharpSlider.value] forKey:@"inputSharpness"];
//Convert CIImage to UIImage
outputImage = [sharpFilter outputImage];
CGImageRef cgimg = [context createCGImage:outputImage fromRect:[outputImage extent]];
newUIImage = [UIImage imageWithCGImage:cgimg];
CGImageRelease(cgimg);
//add image to imageView
[imageView setImage:newUIImage];
}
I've read a post with a similar question; there, a possible solution was to add a category for the UIImage effect you want to provide. The only difference here is that you should use the CISharpenLuminance filter and its inputSharpness parameter.
Back to your question: it seems from your comments that you have a problem with how you initialize your filter. I took a look at the official documentation, and I would use CISharpenLuminance instead during the initialization phase. It is only available in iOS 6, though.
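A minimal sketch of that initialization (iOS 6+; aCIImage comes from the question's code, and 0.4 is just an example value):
// Create the sharpen filter and feed it the source CIImage.
CIFilter *sharpFilter = [CIFilter filterWithName:@"CISharpenLuminance"];
[sharpFilter setValue:aCIImage forKey:kCIInputImageKey];
[sharpFilter setValue:[NSNumber numberWithFloat:0.4f] forKey:@"inputSharpness"];
CIImage *outputImage = [sharpFilter outputImage];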
EDIT
As I said, if you want to stick with Core Image, the feature you want is available on iOS 6 only. If you want to be compatible with iOS 5, I can recommend a third-party library: Brad Larson's GPUImage.
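A hedged sketch of the GPUImage route (GPUImageSharpenFilter and imageByFilteringImage: are per the project's README; verify against the version you use):
#import "GPUImage.h"
// Sharpen a UIImage with GPUImage instead of Core Image.
// aUIImage is the source image from the question; per the library's docs
// the sharpness range is roughly -4.0 to 4.0, with 0.0 as the default.
GPUImageSharpenFilter *sharpenFilter = [[GPUImageSharpenFilter alloc] init];
sharpenFilter.sharpness = sharpSlider.value;
UIImage *sharpened = [sharpenFilter imageByFilteringImage:aUIImage];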
Is there any way in Objective-C/Cocoa (OS X) to crop an image without changing the quality of the image?
I am very near to a solution, but there are still some differences that I can detect in the color. I can notice it when zooming into the text. Here is the code I am currently using:
NSImage *target = [[[NSImage alloc]initWithSize:panelRect.size] autorelease];
target.backgroundColor = [NSColor greenColor];
//start drawing on target
[target lockFocus];
[NSGraphicsContext saveGraphicsState];
[[NSGraphicsContext currentContext] setImageInterpolation:NSImageInterpolationNone];
[[NSGraphicsContext currentContext] setShouldAntialias:NO];
//draw the portion of the source image on target image
[source drawInRect:NSMakeRect(0,0,panelRect.size.width,panelRect.size.height)
fromRect:NSMakeRect(panelRect.origin.x , source.size.height - panelRect.origin.y - panelRect.size.height, panelRect.size.width, panelRect.size.height)
operation:NSCompositeCopy
fraction:1.0];
[NSGraphicsContext restoreGraphicsState];
//end drawing
[target unlockFocus];
//create a NSBitmapImageRep
NSBitmapImageRep *bmpImageRep = [[[NSBitmapImageRep alloc]initWithData:[target TIFFRepresentation]] autorelease];
//add the NSBitmapImage to the representation list of the target
[target addRepresentation:bmpImageRep];
//get the data from the representation
NSData *data = [bmpImageRep representationUsingType: NSJPEGFileType
properties: imageProps];
NSString *filename = [NSString stringWithFormat:@"%@%@.jpg", panelImagePrefix, panelNumber];
NSLog(@"This is the filename: %@", filename);
//write the data to a file
[data writeToFile:filename atomically:NO];
Here is a zoomed-in comparison of the original and the cropped image:
(Screenshots omitted: original image and cropped image.)
The difference is hard to see, but if you flick between them, you can notice it. You can use a colour picker to notice the difference as well. For example, the darkest pixel on the bottom row of the image is a different shade.
I also have a solution that works exactly the way I want it in iOS. Here is the code:
-(void)testMethod:(int)page forRect:(CGRect)rect{
NSString *filePath = @"imageName";
NSData *data = [HeavyResourceManager dataForPath:filePath];//this just gets the image as NSData
UIImage *image = [UIImage imageWithData:data];
CGImageRef imageRef = CGImageCreateWithImageInRect([image CGImage], rect);//crop in the rect
UIImage *result = [UIImage imageWithCGImage:imageRef scale:0 orientation:image.imageOrientation];
CGImageRelease(imageRef);
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsDirectoryPath = [paths objectAtIndex:0];
[UIImageJPEGRepresentation(result, 1.0) writeToFile:[documentsDirectoryPath stringByAppendingPathComponent:@"output.jpg"] atomically:YES];
}
So, is there a way to crop an image in OSX so that the cropped image does not change at all? Perhaps I have to look into a different library, but I would be surprised if I could not do this with Objective-C...
Note: this is a follow-up question to my previous question here.
Update: I have tried (as per the suggestion) rounding the CGRect values to whole numbers, but did not notice a difference. Here is the code I used:
[source drawInRect:NSMakeRect(0,0,(int)panelRect.size.width,(int)panelRect.size.height)
fromRect:NSMakeRect((int)panelRect.origin.x , (int)(source.size.height - panelRect.origin.y - panelRect.size.height), (int)panelRect.size.width, (int)panelRect.size.height)
operation:NSCompositeCopy
fraction:1.0];
Update: I have tried mazzaroth's code, and it works if I save as PNG, but if I try to save as JPEG the image loses quality. So close, but not close enough. Still hoping for a complete answer...
Use CGImageCreateWithImageInRect.
// this chunk of code loads a jpeg image into a cgimage
// creates a second crop of the original image with CGImageCreateWithImageInRect
// writes the new cropped image to the desktop
// ensure that the xy origin of the CGRectMake call is smaller than the width or height of the original image
NSURL *originalImage = [NSURL fileURLWithPath:[[NSBundle mainBundle] pathForResource:@"lockwood" ofType:@"jpg"]];
CGImageRef imageRef = NULL;
CGImageSourceRef loadRef = CGImageSourceCreateWithURL((CFURLRef)originalImage, NULL);
if (loadRef != NULL)
{
imageRef = CGImageSourceCreateImageAtIndex(loadRef, 0, NULL);
CFRelease(loadRef); // Release CGImageSource reference
}
CGImageRef croppedImage = CGImageCreateWithImageInRect(imageRef, CGRectMake(200., 200., 100., 100.));
CFURLRef saveUrl = (CFURLRef)[NSURL fileURLWithPath:[@"~/Desktop/lockwood-crop.jpg" stringByExpandingTildeInPath]];
CGImageDestinationRef destination = CGImageDestinationCreateWithURL(saveUrl, kUTTypeJPEG, 1, NULL);
CGImageDestinationAddImage(destination, croppedImage, nil);
if (!CGImageDestinationFinalize(destination)) {
NSLog(#"Failed to write image to %#", saveUrl);
}
CFRelease(destination);
CFRelease(imageRef);
CFRelease(croppedImage);
I also made a gist:
https://gist.github.com/4259594
Try changing the drawInRect origin to 0.5, 0.5. Otherwise Quartz will distribute each pixel's color over the four adjacent pixels.
Set the color space of the target image; you might have a different color space, causing it to look slightly different.
Try the various rendering intents and see which gives the best result: perceptual versus relative colorimetric, etc. There are four options, I think.
You mention that the colors get modified when saving as JPEG versus PNG.
You can specify the compression level when saving to JPEG: try something like 0.8 or 0.9. You can also save JPEG at maximum quality with 1.0, but then PNG has a distinct advantage. You specify the compression level in the options dictionary for CGImageDestinationAddImage.
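For example, a sketch of passing the quality through the options dictionary (0.9 is an example value; destination and croppedImage are from the answer above):
// Ask ImageIO to compress the JPEG at quality 0.9 instead of the default.
NSDictionary *options = [NSDictionary dictionaryWithObject:[NSNumber numberWithFloat:0.9f]
                                                    forKey:(NSString *)kCGImageDestinationLossyCompressionQuality];
CGImageDestinationAddImage(destination, croppedImage, (CFDictionaryRef)options);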
Finally, if nothing here helps, you should open a TSI with DTS; they can certainly provide the guidance you seek.
The usual problem is that crop sizes are floats, while image pixels are integers; Cocoa interpolates automatically. You need to floor, round, or ceil the sizes and coordinates to make sure they are integers. This may help.
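A sketch of snapping the crop rectangle to whole pixels before using it as the fromRect (panelRect and source are from the question's code; NSIntegralRect expands a rect to integer coordinates):
// Compute the flipped source rect as in the question, then make it integral.
NSRect cropRect = NSIntegralRect(NSMakeRect(panelRect.origin.x,
                                            source.size.height - panelRect.origin.y - panelRect.size.height,
                                            panelRect.size.width,
                                            panelRect.size.height));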
I am deleting EXIF data from JPG files, and I think I found the reason: all losses and changes come from re-compressing the image when it is saved to a file. You may notice the change even if you simply save the whole image again. What I do is read the original JPG and re-compress it at a quality that produces an equivalent file size.
I'm generating a UIImage as such:
//scale UIView size to match underlying UIImage size
float scaleFactor = 10.0;
UIGraphicsBeginImageContextWithOptions(self.bounds.size, self.opaque, scaleFactor);
[self.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage* image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext(); // balance the begin call above
The UIImage has a size of 3200x2400, which is what I want. However, when I convert to PNG format to send as an email attachment:
NSData* data = UIImagePNGRepresentation(image);
MFMailComposeViewController* controller;
...
[controller addAttachmentData:data mimeType:mimeType fileName:fileName];
I end up with an image that is 720 ppi and thus ~12.8 MB, which is way too large.
I don't know where the 720 ppi is coming from; the UIImage is generated from an image that is 72 ppi. It must have something to do with:
UIGraphicsBeginImageContextWithOptions(self.bounds.size, self.opaque,scaleFactor);
I need to create a UIImage from a UIView based on the underlying UIImage (which is much larger than the UIView's bounds), but I need to maintain the original ppi; 720 ppi is far too impractical for an email attachment.
Any thoughts?
Your scaleFactor is too high, which results in large image data. Decrease the scaleFactor and then take the screenshot.
Basically it should be:
float scaleFactor = 1.0;
Convert into PNG like:
NSData *imageData = UIImagePNGRepresentation(imagehere);
Attach imageData to the mail.
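Attaching it would then look something like this (controller is the MFMailComposeViewController from the question; the file name is illustrative):
// Attach the PNG data to the mail composer.
[controller addAttachmentData:imageData mimeType:@"image/png" fileName:@"image.png"];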
EDIT: resize the image like this:
UIGraphicsBeginImageContextWithOptions(self.bounds.size, self.opaque, 1.0);
[yourimageview.image drawInRect:CGRectMake(0, 0, self.bounds.size.width, self.bounds.size.height)];
UIImage* image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
As per eagle.dan.1349's recommendation, I tried the following:
-(UIImage*)convertViewToImage
{
UIImage* retVal = nil;
//create the graphics context
CGSize imageSize = targetImage.size;
CGSize viewSize = self.bounds.size;
//CGSize cvtSize = CGSizeMake(imageSize.width/viewSize.width, imageSize.height/viewSize.height);
float scale = imageSize.width/viewSize.width;
UIGraphicsBeginImageContextWithOptions(self.bounds.size, self.opaque, scale);
//write the contents of this view into the context
[self.layer renderInContext:UIGraphicsGetCurrentContext()];
//get the image
retVal = UIGraphicsGetImageFromCurrentImageContext();
//close the graphics context
UIGraphicsEndImageContext();
NSData* data = UIImageJPEGRepresentation(retVal, 0.0);
// note: the image from UIGraphicsGetImageFromCurrentImageContext is
// autoreleased, so it must not be released here
retVal = [UIImage imageWithData:data];
return retVal;
}
Later on I perform:
NSData* data = UIImagePNGRepresentation(image);
However, as I mentioned, this still results in an image of 5.8 MB, so I suspect somewhere in the neighborhood of 300 ppi.
I need a UIImage, created from a UIView, at the resolution and size I require (72 ppi, 3200x2400). There must be a way of doing this.
First, I wonder how your device doesn't cry bloody tears over such HD images. When I worked on an image-related project, such high-resolution PNGs caused many problems with social network sharing and sending by email, so we moved to JPEG.
In addition, it is generally not recommended to send images over the web as PNG; better to make them JPEG with proper compression.
However, if you are required to use PNG, you can try this kind of trick: first convert the image to JPEG data, initialize a new image with that data, and then convert it to PNG, as sketched below.
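A sketch of that round trip (0.9 is an example quality value):
// Re-compress as JPEG first, then produce the PNG from the re-compressed image.
NSData *jpegData = UIImageJPEGRepresentation(image, 0.9);
UIImage *recompressed = [UIImage imageWithData:jpegData];
NSData *pngData = UIImagePNGRepresentation(recompressed);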
Edit: In addition, try setting the context to just the size you need (320x240) and the scale not to 10 but to 0, so the system determines the required scale; it may help. Then just scale your resulting UIImage once more, as in the sketch below.
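A sketch of that edit (the 320x240 size is the answer's suggestion; a scale of 0 lets the system pick the device scale):
// Render the view's layer into a context whose scale the system chooses.
UIGraphicsBeginImageContextWithOptions(CGSizeMake(320, 240), self.opaque, 0);
[self.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// Scale the resulting UIImage afterwards if a larger bitmap is needed.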
I have a problem when trying to display small (16x16) UIImages, they appear a bit blurry.
These UIImages are favicons I download from different websites and I have compared them with some images from other apps and they're blurrier.
I'm displaying them on a custom UITableViewCell like below:
NSData *favicon = [NSData dataWithContentsOfURL:[NSURL URLWithString:[subscription faviconURL]]];
if([favicon length] > 0){
UIImage *img = [[UIImage alloc] initWithData:favicon];
CGSize size;
size.height = 16;
size.width = 16;
UIGraphicsBeginImageContext(size);
[img drawInRect:CGRectMake(0, 0, size.width, size.height)];
UIImage *scaledImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
cell.mainPicture.image = scaledImage;
}
Is there anything wrong with my custom UITableViewCell or the way I download the image ?
Thank you.
[EDIT 1] : By the way, .ico and .png look the same.
[EDIT 2] : I'm working on an iPad 2, so no Retina display.
When you display the resultant UIImage to the user, are you aligning the view on pixel boundaries?
theImage.frame = CGRectIntegral(theImage.frame);
Most of your graphics and text will appear to be blurry if your views are positioned or sized with non-integral values. If you run your app in the simulator, you can turn on "Color Misaligned Images" to highlight elements with bad offsets.
I need to horizontally flip some video I'm previewing and capturing: à la iChat, I have a webcam and want it to appear as though the user is looking in a mirror.
I'm previewing QuickTime video in a QTCaptureView. My capturing is done frame by frame (for reasons I won't get into) with something like:
imageRep = [NSCIImageRep imageRepWithCIImage: [CIImage imageWithCVImageBuffer: frame]];
image = [[NSImage alloc] initWithSize: [imageRep size]];
[image addRepresentation: imageRep];
[movie addImage: image forDuration: someDuration withAttributes: someAttributes];
Any tips?
Nothing like resurrecting an old question. Anyway, I came here and almost found what I was looking for thanks to Brian Webster, but if anyone is looking for the wholesale solution, try this after setting your class as the delegate of the QTCaptureView instance:
- (CIImage *)view:(QTCaptureView *)view willDisplayImage:(CIImage *)image {
//mirror image across y axis
return [image imageByApplyingTransform:CGAffineTransformMakeScale(-1, 1)];
}
You could do this by taking the CIImage you're getting from the capture and running it through a Core Image filter to flip the image around. You would then pass the resulting image into your image rep rather than the original one. The code would look something like:
CIImage* capturedImage = [CIImage imageWithCVImageBuffer:buffer];
NSAffineTransform* flipTransform = [NSAffineTransform transform];
CIFilter* flipFilter;
CIImage* flippedImage;
[flipTransform scaleByX:-1.0 y:1.0]; //horizontal flip
flipFilter = [CIFilter filterWithName:@"CIAffineTransform"];
[flipFilter setValue:flipTransform forKey:@"inputTransform"];
[flipFilter setValue:capturedImage forKey:@"inputImage"];
flippedImage = [flipFilter valueForKey:#"outputImage"];
imageRep = [NSCIImageRep imageRepWithCIImage:flippedImage];
...
Try this! It will apply the filters to the capture view, but not to the output video.
- (IBAction)Vibrance:(id)sender
{
CIFilter* CIVibrance = [CIFilter filterWithName:@"CIVibrance" keysAndValues:
                        @"inputAmount", [NSNumber numberWithDouble:2.0f],
nil];
mCaptureView.contentFilters = [NSArray arrayWithObject:CIVibrance];
}
By the way, you can apply any of the filters from this reference: https://developer.apple.com/library/mac/#documentation/graphicsimaging/reference/CoreImageFilterReference/Reference/reference.html