I'm generating a UIImage as follows:
//scale UIView size to match underlying UIImage size
float scaleFactor = 10.0;
UIGraphicsBeginImageContextWithOptions(self.bounds.size, self.opaque, scaleFactor);
[self.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage* image = UIGraphicsGetImageFromCurrentImageContext();
The UIImage has a size of 3200x2400, which is what I want. However, when I convert to PNG format to send as an email attachment:
NSData* data = UIImagePNGRepresentation(image);
MFMailComposeViewController* controller;
...
[controller addAttachmentData:data mimeType:mimeType fileName:fileName];
I end up with an image that is 720 ppi and thus ~12.8 MB, which is way too large.
I don't know where the 720 ppi is coming from; the UIImage is generated from an image that is 72 ppi. It must have something to do with:
UIGraphicsBeginImageContextWithOptions(self.bounds.size, self.opaque,scaleFactor);
I need to create a UIImage from a UIView based on the underlying UIImage (which is much larger than the UIView's bounds), but I need to maintain the original ppi; 720 ppi is completely impractical for an email attachment.
Any thoughts?
Your scaleFactor is too high, which results in large image data. Decrease scaleFactor and then take the screenshot.
Basically it should be:
float scaleFactor = 1.0;
Convert into PNG like:
NSData *imageData = UIImagePNGRepresentation(imagehere);
Attach imageData to mail.
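For example (the mime type and file name here are assumptions):

[controller addAttachmentData:imageData
                     mimeType:@"image/png"
                     fileName:@"screenshot.png"];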
EDIT: resize the image like this:
UIGraphicsBeginImageContextWithOptions(self.bounds.size, self.opaque, 1.0);
[yourimageview.image drawInRect:CGRectMake(0, 0, self.bounds.size.width, self.bounds.size.height)];
UIImage* image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
As per eagle.dan.1349's recommendation, I tried the following:
- (UIImage *)convertViewToImage
{
    UIImage *retVal = nil;

    //create the graphics context
    CGSize imageSize = targetImage.size;
    CGSize viewSize = self.bounds.size;
    //CGSize cvtSize = CGSizeMake(imageSize.width/viewSize.width, imageSize.height/viewSize.height);
    float scale = imageSize.width / viewSize.width;
    UIGraphicsBeginImageContextWithOptions(self.bounds.size, self.opaque, scale);

    //write the contents of this view into the context
    [self.layer renderInContext:UIGraphicsGetCurrentContext()];

    //get the image
    retVal = UIGraphicsGetImageFromCurrentImageContext();

    //close the graphics context
    UIGraphicsEndImageContext();

    //round-trip through JPEG to shrink the data
    //(retVal is autoreleased, so it must not be released here)
    NSData *data = UIImageJPEGRepresentation(retVal, 0.0);
    retVal = [UIImage imageWithData:data];
    return retVal;
}
Later on I perform:
NSData* data = UIImagePNGRepresentation(image);
However, as I mentioned, this still results in an image of 5.8 MB, so I suspect it is somewhere in the neighborhood of 300 ppi.
I need a UIImage, created from a UIView, at the resolution and size I require (72 ppi, 3200x2400). There must be a way of doing this.
First, I wonder how your device doesn't cry bloody tears from such HD images. When I worked on an image-related project, such high-resolution PNGs caused many problems with social network sharing and sending by email, so we moved to JPEG.
In addition, it is generally not recommended to send images over the web as PNG; better to make it a JPEG with proper compression.
However, if you are required to use PNG, you can try this kind of trick: first convert it to JPEG data, init your image with that data, and then convert it to PNG.
Edit: In addition, try setting the context to just the size you need (320x240) and the scale not to 10 but to 0, so the system determines the required scale. It may help. Then just scale your resulting UIImage once more.
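A minimal sketch combining both suggestions (the 0.7 JPEG quality is just an assumption; tune it to taste):

// Render the view at a scale of 0 so the system picks the screen scale.
UIGraphicsBeginImageContextWithOptions(self.bounds.size, self.opaque, 0.0);
[self.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *rendered = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

// Round-trip through JPEG to shrink the data, then re-encode as PNG if required.
NSData *jpegData = UIImageJPEGRepresentation(rendered, 0.7);
UIImage *compressed = [UIImage imageWithData:jpegData];
NSData *pngData = UIImagePNGRepresentation(compressed);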
Related
I am new to Objective-C and Cocoa programming. I am trying to generate an image that will be 128 mm in height and 128 mm in width with 300 DPI resolution.
NSImage *image = [[NSImage alloc] initWithSize:NSMakeSize(1512, 756)];
In the above line of code, 1512 and 756 are treated as points, so I am not able to convert it to what I need. It is creating an image with a size of 3024 x 1512.
Can you please suggest something...
Thanks in advance.
Here is the code I tried:
NSImage *image = [[NSImage alloc] initWithSize:NSMakeSize(2268, 2268)];
[image lockFocus];
// Draw Something
[image unlockFocus];

NSString *pathh = @"/Users/abcd/Desktop/Images/1234.bmp";
CGImageRef cgRef = [image CGImageForProposedRect:NULL
                                         context:nil
                                           hints:nil];
NSBitmapImageRep *newRep = [[NSBitmapImageRep alloc] initWithCGImage:cgRef];

NSSize copySize; // To change to 600 DPI
copySize.width = 600 * [newRep pixelsWide] / 72.0;
copySize.height = 600 * [newRep pixelsHigh] / 72.0;
[newRep setSize:copySize]; // This input is not working

NSData *bmpData = [newRep representationUsingType:NSBMPFileType properties:nil];
[bmpData writeToFile:pathh atomically:YES];
Your question is somewhat confusing, as you state you want a square image (128 x 128 mm) and then attempt to construct a rectangular one (1512 x 756 pts).
Guessing: it seems you may need to understand the difference between an NSImage and an NSImageRep and how they interact. Read Apple's Cocoa Drawing Guide, in particular the Images section. You may find the subsection Image Size and Resolution helps to set the picture (no pun intended ;-)).
Another area you can read up on is printing - this often requires the generation of 300dpi images.
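Guessing again at what you're after, here is a minimal sketch (assuming you really want a 128 x 128 mm square at 300 DPI; the drawing code and output path are placeholders): the pixel dimensions come from the physical size and DPI, and the rep's size in points is what carries the DPI when the file is written.

#import <Cocoa/Cocoa.h>

// 128 mm at 300 DPI -> pixel count: 128 / 25.4 * 300 ≈ 1512 px.
CGFloat dpi = 300.0;
NSInteger pixels = (NSInteger)(128.0 / 25.4 * dpi + 0.5);

NSBitmapImageRep *rep = [[NSBitmapImageRep alloc]
    initWithBitmapDataPlanes:NULL
                  pixelsWide:pixels
                  pixelsHigh:pixels
               bitsPerSample:8
             samplesPerPixel:4
                    hasAlpha:YES
                    isPlanar:NO
              colorSpaceName:NSCalibratedRGBColorSpace
                 bytesPerRow:0
                bitsPerPixel:0];

// The size in points carries the DPI: 1512 px * 72 / 300 ≈ 363 pt.
[rep setSize:NSMakeSize(pixels * 72.0 / dpi, pixels * 72.0 / dpi)];

[NSGraphicsContext saveGraphicsState];
[NSGraphicsContext setCurrentContext:
    [NSGraphicsContext graphicsContextWithBitmapImageRep:rep]];
// Draw something here...
[NSGraphicsContext restoreGraphicsState];

NSData *pngData = [rep representationUsingType:NSPNGFileType properties:@{}];
[pngData writeToFile:@"/Users/abcd/Desktop/Images/1234.png" atomically:YES]; // placeholder path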
HTH
I have a UIImageView *userImage and a UIImageView *imageSquare whose size is 320x320. The user will be able to play with userImage, changing its size and position. imageSquare is static, placed in the middle of the screen, and should be seen as the cropping view.
The code below can crop userImage at imageSquare's original size, but not with its new aspect ratio / scale.
I've been going crazy trying to do this but I can't find a way. How can I crop the current view of userImage (the one the user is manipulating)?
CGSize pageSize = imageSquare.frame.size;
UIGraphicsBeginImageContext(pageSize);
CGContextRef resizedContext = UIGraphicsGetCurrentContext();
CGContextTranslateCTM(resizedContext, -imageSquare.frame.origin.x, -imageSquare.frame.origin.y);
[userImage.layer renderInContext:resizedContext];
image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
if (image != nil) {
    NSLog(@"is not nil");
    NSData *imgData = UIImagePNGRepresentation(image);
    imageSquare.image = [[UIImage alloc] initWithData:imgData];
}
Your 320x320 cropping view should be a UIView with clipsToBounds ("Clip Subviews" in Interface Builder) set to YES, and then userImage should be one of its subviews.
When the user changes size/position, you change userImage.frame, and the cropping will be handled for you.
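To then capture exactly what is visible inside the cropping view, one option (containerView is a hypothetical name for that clipping UIView) is to render its layer:

// Render only what falls inside the clipping container view.
UIGraphicsBeginImageContextWithOptions(containerView.bounds.size, NO, 0.0);
[containerView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *croppedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();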
I have a problem when trying to display small (16x16) UIImages: they appear a bit blurry.
These UIImages are favicons I download from different websites; I have compared them with similar images from other apps, and mine are blurrier.
I'm displaying them in a custom UITableViewCell like below:
NSData *favicon = [NSData dataWithContentsOfURL:[NSURL URLWithString:[subscription faviconURL]]];
if ([favicon length] > 0) {
    UIImage *img = [[UIImage alloc] initWithData:favicon];
    CGSize size;
    size.height = 16;
    size.width = 16;
    UIGraphicsBeginImageContext(size);
    [img drawInRect:CGRectMake(0, 0, size.width, size.height)];
    UIImage *scaledImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    cell.mainPicture.image = scaledImage;
}
Is there anything wrong with my custom UITableViewCell or the way I download the image?
Thank you.
[EDIT 1]: By the way, .ico and .png look the same.
[EDIT 2]: I'm working on an iPad 2, so no Retina display.
When you display the resultant UIImage to the user, are you aligning the view on pixel boundaries?
theImage.frame = CGRectIntegral(theImage.frame);
Most of your graphics and text will appear to be blurry if your views are positioned or sized with non-integral values. If you run your app in the simulator, you can turn on "Color Misaligned Images" to highlight elements with bad offsets.
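For example, if mainPicture is the 16x16 image view from the question, snapping its frame in the cell's layout pass is an easy thing to try (a sketch, assuming a custom cell subclass):

// In the custom cell: snap the favicon view to whole-pixel boundaries.
- (void)layoutSubviews {
    [super layoutSubviews];
    self.mainPicture.frame = CGRectIntegral(self.mainPicture.frame);
}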
I have an application that pulls images from an NSURL. Is it possible to inform the application that they are retina ('@2x') versions (the images are of retina resolution)? I currently have the following, but the images appear pixelated on the higher-resolution displays:
NSURL *url = [NSURL URLWithString:self.imageURL];
NSData *data = [NSData dataWithContentsOfURL:url];
UIImage *image = [UIImage imageWithData:data];
self.pictureImageView.image = image;
You need to rescale the UIImage before adding it to the image view.
NSURL *url = [NSURL URLWithString:self.imageURL];
NSData *data = [NSData dataWithContentsOfURL:url];
UIImage *image = [UIImage imageWithData:data];
CGFloat screenScale = [UIScreen mainScreen].scale;
if (image.scale != screenScale)
image = [UIImage imageWithCGImage:image.CGImage scale:screenScale orientation:image.imageOrientation];
self.pictureImageView.image = image;
It's best to avoid hard-coding the scale value, thus the UIScreen call. See Apple’s documentation on UIImage’s scale property for more information about why this is necessary.
It’s also best to avoid using NSData’s -dataWithContentsOfURL: method (unless your code is running on a background thread), as it uses a synchronous network call which cannot be monitored or cancelled. You can read more about the pains of synchronous networking and the ways to avoid it in this Apple Technical Q&A.
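For example, a minimal sketch (reusing the url and pictureImageView from above, and assuming this code starts on the main thread) that moves the synchronous load onto a background queue:

CGFloat screenScale = [UIScreen mainScreen].scale;
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    // Load and decode off the main thread so the UI stays responsive.
    NSData *data = [NSData dataWithContentsOfURL:url];
    UIImage *image = [UIImage imageWithData:data];
    if (image.scale != screenScale) {
        image = [UIImage imageWithCGImage:image.CGImage
                                    scale:screenScale
                              orientation:image.imageOrientation];
    }
    dispatch_async(dispatch_get_main_queue(), ^{
        self.pictureImageView.image = image;
    });
});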
Try using imageWithData:scale: (iOS 6 and later)
NSData *imageData = [NSData dataWithContentsOfURL:url];
UIImage *image = [UIImage imageWithData:imageData scale:[[UIScreen mainScreen] scale]];
You need to set the scale on the UIImage.
UIImage* img = [[UIImage alloc] initWithData:data];
CGFloat screenScale = [UIScreen mainScreen].scale;
if (screenScale != img.scale) {
img = [UIImage imageWithCGImage:img.CGImage scale:screenScale orientation:img.imageOrientation];
}
The documentation says to be careful to construct all your UIImages at the same scale, otherwise you might get weird display issues where things show at half size, double size, half resolution, et cetera. To avoid all that, load all UIImages at retina resolution. Resources will be loaded at the correct scale automatically. For UIImages constructed from URL data, you need to set it.
Just to add to this, what I did specifically in the same situation was the following; it works like a charm.
double scaleFactor = [UIScreen mainScreen].scale;
NSLog(@"Scale Factor is %f", scaleFactor);
if (scaleFactor == 1.0) {
    [cell.videoImageView setImageWithURL:[NSURL URLWithString:regularThumbnailURLString]];
} else if (scaleFactor == 2.0) {
    [cell.videoImageView setImageWithURL:[NSURL URLWithString:retinaThumbnailURLString]];
}
The @2x convention is just a convenient way of loading images from the application bundle.
If you want to show an image on a retina display, then you have to make it 2x bigger:
Image size: 100x100
View size: 50x50
Edit: I think if you're loading images from a server, the best solution would be to add an additional param (e.g. scale) and return images of the appropriate size:
www.myserver.com/get_image.php?image_name=img.png&scale=2
You can obtain the scale using [[UIScreen mainScreen] scale].
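A quick sketch of building such a request (get_image.php and its parameters are just the hypothetical endpoint from the example above):

CGFloat scale = [[UIScreen mainScreen] scale];
NSString *urlString = [NSString stringWithFormat:
    @"http://www.myserver.com/get_image.php?image_name=img.png&scale=%.0f", scale];
NSURL *imageURL = [NSURL URLWithString:urlString];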
To tell the iPhone programmatically that a particular image is Retina, you can do something like this:
UIImage *img = [self getImageFromDocumentDirectory];
img = [UIImage imageWithCGImage:img.CGImage scale:2 orientation:img.imageOrientation];
In my case, the TabBarItem image was dynamic, i.e. it was downloaded from a server, so iOS could not identify it as retina. The above code snippet worked for me like a charm.
I am trying to capture video from a camera. I have gotten the captureOutput:didOutputSampleBuffer: callback to trigger, and it gives me a sample buffer that I then convert to a CVImageBufferRef. I then attempt to convert that image to a UIImage that I can view in my app.
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    /*Lock the image buffer*/
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    /*Get information about the image*/
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    /*We unlock the image buffer*/
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    /*Create a CGImageRef from the CVImageBufferRef*/
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef newImage = CGBitmapContextCreateImage(newContext);

    /*We release some components*/
    CGContextRelease(newContext);
    CGColorSpaceRelease(colorSpace);

    /*We display the result on the custom layer*/
    /*self.customLayer.contents = (id) newImage;*/

    /*We display the result on the image view (we need to change the orientation of the image so that the video is displayed correctly)*/
    UIImage *image = [UIImage imageWithCGImage:newImage scale:1.0 orientation:UIImageOrientationRight];
    self.capturedView.image = image;

    /*We release the CGImageRef*/
    CGImageRelease(newImage);
}
The code seems to work fine up until the call to CGBitmapContextCreate; it always returns a NULL pointer, so consequently none of the rest of the function works. No matter what I seem to pass it, the function returns NULL. I have no idea why.
The way that you are passing baseAddress presumes that the image data is in the form ACCC (where C is some color component: R, G, or B).
If you've set up your AVCaptureSession to capture the video frames in native format, more than likely you're getting the video data back in planar YUV420 format (see: link text). In order to do what you're attempting here, probably the easiest thing would be to specify that you want the video frames captured in kCVPixelFormatType_32RGBA. Apple recommends that you capture the video frames in kCVPixelFormatType_32BGRA if you capture them in non-planar format at all; the reasoning is not stated, but I can reasonably assume it is due to performance considerations.
Caveat: I've not done this, and am assuming that accessing the CVPixelBufferRef contents like this is a reasonable way to build the image. I can't vouch for this actually working, but I /can/ tell you that the way you are doing things right now will reliably fail, due to the pixel format you are (probably) capturing the video frames in.
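For illustration, a rough sketch of requesting BGRA frames (output and captureSession here are assumed names for your AVCaptureVideoDataOutput and AVCaptureSession); with BGRA, the bitmapInfo already used in the question's CGBitmapContextCreate call should line up:

#import <AVFoundation/AVFoundation.h>

AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
// Ask for non-planar BGRA frames so the pixel buffer's base address maps onto
// a kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst bitmap context.
output.videoSettings = [NSDictionary dictionaryWithObject:
                            [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA]
                                                   forKey:(id)kCVPixelBufferPixelFormatTypeKey];
[captureSession addOutput:output]; // captureSession: your existing AVCaptureSession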
If you need to convert a CVImageBufferRef to a UIImage, it unfortunately seems to be much more difficult than it should be.
Essentially you need to convert it first to a CIImage, then a CGImage, and finally a UIImage. I wish I could tell you why. :)
- (void)screenshotOfVideoStream:(CVImageBufferRef)imageBuffer
{
    CIImage *ciImage = [CIImage imageWithCVPixelBuffer:imageBuffer];
    CIContext *temporaryContext = [CIContext contextWithOptions:nil];
    CGImageRef videoImage = [temporaryContext createCGImage:ciImage
                                                   fromRect:CGRectMake(0, 0,
                                                       CVPixelBufferGetWidth(imageBuffer),
                                                       CVPixelBufferGetHeight(imageBuffer))];

    UIImage *image = [[UIImage alloc] initWithCGImage:videoImage];
    [self doSomethingWithOurUIImage:image];
    CGImageRelease(videoImage);
}
This particular method worked for me when I was converting H.264 video using the VTDecompressionSession callback to get the CVImageBufferRef (but it should work for any CVImageBufferRef). I was using iOS 8.1, Xcode 6.2.
You can directly call:
self.yourImageView.image = [[UIImage alloc] initWithCIImage:[CIImage imageWithCVPixelBuffer:imageBuffer]];
Benjamin Loulier wrote a really good post on outputting a CVImageBufferRef via multiple approaches, with speed in mind.
You can also find a working example on github ;)
How about back in time? ;)
Here you go: http://web.archive.org/web/20140426162537/http://www.benjaminloulier.com/posts/ios4-and-direct-access-to-the-camera