I use SDWebImage to set the image like this:
[self.imageView sd_setImageWithURL:topicModel.large_image
placeholderImage:nil
options:0
progress:^(NSInteger receivedSize, NSInteger expectedSize) {
} completed:^(UIImage *image,
NSError *error,
SDImageCacheType cacheType,
NSURL *imageURL) {
}];
After this, I set the imageView's contentMode like this:
if (topicModel.isBigPicture) {
self.imageView.contentMode = UIViewContentModeTop;
self.imageView.clipsToBounds = YES;
} else {
self.imageView.contentMode = UIViewContentModeScaleAspectFill;
self.imageView.clipsToBounds = NO;
}
The size of the imageView is (355, 200), and the image is bigger than the imageView in both width and height.
If topicModel is a big picture, the contentMode will be UIViewContentModeTop. UIViewContentModeTop should keep the image at its original size and only adjust its position.
But I found a problem: in my case, the image ends up sized (300, XXX), where XXX seems to be calculated as (image.realHeight * 300.0 / image.realWidth). I never change the size of the image anywhere in my project.
Here are my questions:
What's wrong with my code?
How do I make the image's width the same as the imageView's width?
Finally, I found the answer.
The reason is that I use the newest version of SDWebImage.
Here is the solution :-)
If you want to get the original image, you must change the code in SDWebImage/SDWebImageDecoder.m like this:
CGContextRelease(context);
// add this line
UIImage *decompressedImage = [UIImage imageWithCGImage:decompressedImageRef];
// delete this line
// UIImage *decompressedImage = [UIImage imageWithCGImage:decompressedImageRef scale:image.scale orientation:image.imageOrientation];
CGImageRelease(decompressedImageRef);
return decompressedImage;
}
If you want to know more details, you can see the link below:
SDWebImage ---- Images from web-url too small when using v. 3.7.4 and above
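If you would rather not patch the pod, one workaround is to rebuild the decoded image at scale 1.0 in the completed block. This is only a minimal sketch of the same idea (it assumes the shrink comes entirely from the scale the decoder applied), not part of the original answer:
// Sketch: undo the decoder's scale without touching SDWebImage itself.
[self.imageView sd_setImageWithURL:topicModel.large_image
placeholderImage:nil
options:0
progress:nil
completed:^(UIImage *image, NSError *error, SDImageCacheType cacheType, NSURL *imageURL) {
    if (image && image.scale != 1.0) {
        // re-wrap the CGImage at scale 1.0 so contentMode sees the full pixel size
        self.imageView.image = [UIImage imageWithCGImage:image.CGImage scale:1.0 orientation:image.imageOrientation];
    }
}];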
My app assigns and displays an image inside of a UIImageView. This happens with 24 imageViews, all at viewDidLoad. The images are assigned randomly from a list of fifty images. The view controller is presented modally from the main screen. The first time, it takes a while to load. If I'm lucky, the view loads a second time. The third time, it almost always crashes. I've tried resizing the images to around 200 pixels. I've tried assigning the images with:
image1 = [UIImage imageNamed:@"image1.png"];
[self.imageView setImage: image1];
and also with:
NSString *imagePath = [[NSBundle mainBundle] pathForResource:@"image1" ofType:@"png"];
image1 = [[UIImage alloc] initWithContentsOfFile:imagePath];
This second one seemed to only make things worse.
I also tried running the app with Instruments, which didn't recognize any memory leaks.
I really don't know where else to turn. This app represents an enormous investment of time and I would really like to see this problem resolved...
Thank you so much
The most efficient way to load a smaller version of an image from disk is this: instead of using imageNamed:, use the Image I/O framework to request a thumbnail that is the actual size you'll be displaying, by calling CGImageSourceCreateThumbnailAtIndex. Here's the example from my book:
NSURL* url =
[[NSBundle mainBundle] URLForResource:@"colson"
withExtension:@"jpg"];
CGImageSourceRef src =
CGImageSourceCreateWithURL((__bridge CFURLRef)url, nil);
CGFloat scale = [UIScreen mainScreen].scale;
CGFloat w = self.iv.bounds.size.width*scale;
NSDictionary* d =
@{(id)kCGImageSourceShouldAllowFloat: (id)kCFBooleanTrue,
(id)kCGImageSourceCreateThumbnailWithTransform: (id)kCFBooleanTrue,
(id)kCGImageSourceCreateThumbnailFromImageAlways: (id)kCFBooleanTrue,
(id)kCGImageSourceThumbnailMaxPixelSize: @((int)w)};
CGImageRef imref =
CGImageSourceCreateThumbnailAtIndex(src, 0, (__bridge CFDictionaryRef)d);
UIImage* im =
[UIImage imageWithCGImage:imref scale:scale
orientation:UIImageOrientationUp];
self.iv.image = im;
CFRelease(imref); CFRelease(src);
It is a huge waste of memory to ask a UIImageView to display an image larger than the UIImageView itself, as the bitmap for the full-size image must be maintained in memory. The Image I/O framework generates the smaller version without ever unpacking the entire original image into memory as a bitmap.
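If you need this in more than one place, the same Image I/O calls can be wrapped in a small helper. A sketch (the function name is hypothetical):
// Hypothetical helper: load a thumbnail no wider than pointWidth points.
static UIImage *ThumbnailForURL(NSURL *url, CGFloat pointWidth) {
    CGFloat scale = [UIScreen mainScreen].scale;
    CGImageSourceRef src = CGImageSourceCreateWithURL((__bridge CFURLRef)url, nil);
    if (!src) return nil;
    NSDictionary *d = @{(id)kCGImageSourceShouldAllowFloat: (id)kCFBooleanTrue,
                        (id)kCGImageSourceCreateThumbnailWithTransform: (id)kCFBooleanTrue,
                        (id)kCGImageSourceCreateThumbnailFromImageAlways: (id)kCFBooleanTrue,
                        (id)kCGImageSourceThumbnailMaxPixelSize: @((int)(pointWidth * scale))};
    CGImageRef imref = CGImageSourceCreateThumbnailAtIndex(src, 0, (__bridge CFDictionaryRef)d);
    CFRelease(src);
    if (!imref) return nil;
    UIImage *im = [UIImage imageWithCGImage:imref scale:scale orientation:UIImageOrientationUp];
    CFRelease(imref);
    return im;
}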
I had this problem once before with images from a regular website that were way larger than the view I was using. If I remember correctly, the images are uncompressed to their full resolution and then fit into the image view, hogging your memory. I had to scale them down to the image view's size before showing them. Add CoreGraphics.framework and use this class to make an image object to use with your image view. I found it online and tweaked it a little while looking for the same answer, but I don't remember where, so thanks to whoever posted the original.
ImageScale.h
#import <UIKit/UIKit.h>

@interface ImageScale : NSObject
+ (UIImage*)imageWithImage:(UIImage*)sourceImage scaledToSize:(CGSize)newSize;
@end
ImageScale.m
#import "ImageScale.h"
@implementation ImageScale
+ (UIImage*)imageWithImage:(UIImage*)sourceImage scaledToSize:(CGSize)newSize
{
CGFloat targetWidth = newSize.width;
CGFloat targetHeight = newSize.height;
CGImageRef imageRef = [sourceImage CGImage];
CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(imageRef);
CGColorSpaceRef colorSpaceInfo = CGImageGetColorSpace(imageRef);
// compare only the alpha-info bits; CGBitmapInfo also carries byte-order flags
if ((bitmapInfo & kCGBitmapAlphaInfoMask) == kCGImageAlphaNone) {
bitmapInfo = (bitmapInfo & ~kCGBitmapAlphaInfoMask) | kCGImageAlphaNoneSkipLast;
}
CGContextRef bitmap;
if (sourceImage.imageOrientation == UIImageOrientationUp || sourceImage.imageOrientation == UIImageOrientationDown) {
// pass 0 for bytesPerRow so CG computes the stride for the new size;
// reusing the source image's bytesPerRow is wrong once the size changes
bitmap = CGBitmapContextCreate(NULL, targetWidth, targetHeight, CGImageGetBitsPerComponent(imageRef), 0, colorSpaceInfo, bitmapInfo);
} else {
bitmap = CGBitmapContextCreate(NULL, targetHeight, targetWidth, CGImageGetBitsPerComponent(imageRef), 0, colorSpaceInfo, bitmapInfo);
}
}
if (sourceImage.imageOrientation == UIImageOrientationLeft) {
CGContextRotateCTM (bitmap, M_PI_2); // + 90 degrees
CGContextTranslateCTM (bitmap, 0, -targetHeight);
} else if (sourceImage.imageOrientation == UIImageOrientationRight) {
CGContextRotateCTM (bitmap, -M_PI_2); // - 90 degrees
CGContextTranslateCTM (bitmap, -targetWidth, 0);
} else if (sourceImage.imageOrientation == UIImageOrientationUp) {
// NOTHING
} else if (sourceImage.imageOrientation == UIImageOrientationDown) {
CGContextTranslateCTM (bitmap, targetWidth, targetHeight);
CGContextRotateCTM (bitmap, -M_PI); // - 180 degrees
}
CGContextDrawImage(bitmap, CGRectMake(0, 0, targetWidth, targetHeight), imageRef);
CGImageRef ref = CGBitmapContextCreateImage(bitmap);
UIImage* newImage = [UIImage imageWithCGImage:ref];
CGContextRelease(bitmap);
CGImageRelease(ref);
return newImage;
}
@end
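Calling it might look like this (bigImage stands for whatever full-size image you loaded):
// Scale down to the view's size before display, so the full-size
// bitmap doesn't have to stay in memory.
UIImage *scaled = [ImageScale imageWithImage:bigImage scaledToSize:self.imageView.bounds.size];
self.imageView.image = scaled;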
In my project I have an NSImageView, and I drop a picture from the Finder onto it. In an IBAction I want to check whether the picture's size is bigger than the dimensions of the NSImageView, as shown below.
I hoped that sender.size or sender.size.x or sender.size.width would give me the size of the NSImageView imageDropped, but it doesn't.
Any ideas how I can get the dimensions of the NSImageView?
- (IBAction)imageWasDropped:(id)sender {
NSImage *theImage = [imageDropped image];
NSLog(@"%@", sender); // this returns <NSImageView: 0x10011d210>
NSSize _dimensions = [theImage size];
if (_dimensions.width > sender.size){
}
[imageOriginal setImage:theImage];
[picture_1 setImage:theImage];
}
This is very simple. _imageView is the outlet name of the image well where you drop or display your image.
NSLog(@"%f, %f", [[_imageView image] size].height, [[_imageView image] size].width);
EDIT:
For the image well's size:
NSLog(@"Image well size is: %f x %f", [_imageView frame].size.height, [_imageView frame].size.width);
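Folding that back into the question's action, a sketch might read (same outlet names as above):
- (IBAction)imageWasDropped:(id)sender {
    NSImage *theImage = [imageDropped image];
    NSSize imageSize = [theImage size];
    NSSize wellSize = [imageDropped frame].size; // the view's own size, not the image's
    if (imageSize.width > wellSize.width || imageSize.height > wellSize.height) {
        // the dropped picture is larger than the image well
    }
    [imageOriginal setImage:theImage];
    [picture_1 setImage:theImage];
}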
I have some pretty basic code to capture a still image using AVFoundation.
AVCaptureDeviceInput *newVideoInput = [[AVCaptureDeviceInput alloc] initWithDevice:[self backFacingCamera] error:nil];
AVCaptureStillImageOutput *newStillImageOutput = [[AVCaptureStillImageOutput alloc] init];
NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys:
AVVideoCodecJPEG, AVVideoCodecKey,
nil];
[newStillImageOutput setOutputSettings:outputSettings];
[outputSettings release];
AVCaptureSession *newCaptureSession = [[AVCaptureSession alloc] init];
[newCaptureSession beginConfiguration];
newCaptureSession.sessionPreset = AVCaptureSessionPreset640x480;
[newCaptureSession commitConfiguration];
if ([newCaptureSession canAddInput:newVideoInput]) {
[newCaptureSession addInput:newVideoInput];
}
if ([newCaptureSession canAddOutput:newStillImageOutput]) {
[newCaptureSession addOutput:newStillImageOutput];
}
self.stillImageOutput = newStillImageOutput;
self.videoInput = newVideoInput;
self.captureSession = newCaptureSession;
[newStillImageOutput release];
[newVideoInput release];
[newCaptureSession release];
My method that captures the still image is also pretty simple, and it prints out the orientation, which is AVCaptureVideoOrientationPortrait:
- (void) captureStillImage
{
AVCaptureConnection *stillImageConnection = [AVCamUtilities connectionWithMediaType:AVMediaTypeVideo fromConnections:[[self stillImageOutput] connections]];
if ([stillImageConnection isVideoOrientationSupported]){
NSLog(@"isVideoOrientationSupported - orientation = %d", orientation);
[stillImageConnection setVideoOrientation:orientation];
}
[[self stillImageOutput] captureStillImageAsynchronouslyFromConnection:stillImageConnection
completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
ALAssetsLibraryWriteImageCompletionBlock completionBlock = ^(NSURL *assetURL, NSError *error) {
if (error) { // HANDLE }
};
if (imageDataSampleBuffer != NULL) {
CFDictionaryRef exifAttachments = CMGetAttachment(imageDataSampleBuffer, kCGImagePropertyExifDictionary, NULL);
if (exifAttachments) {
NSLog(@"attachments: %@", exifAttachments);
} else {
NSLog(@"no attachments");
}
self.stillImageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
self.stillImage = [UIImage imageWithData:self.stillImageData];
UIImageWriteToSavedPhotosAlbum(self.stillImage, self, @selector(image:didFinishSavingWithError:contextInfo:), nil);
}
else
completionBlock(nil, error);
}];
}
So the device understands it's in portrait mode, as it should be, and the EXIF attachments show me:
PixelXDimension = 640;
PixelYDimension = 480;
so it seems to know that we're at 640x480, and that means WxH (obviously...).
However, when I email the photo to myself from Apple's Photos app, I get a 480x640 image if I check the properties in Preview. This didn't make any sense to me until I dug further into the image properties and found that the image orientation is set to "6 (Rotated 90 degrees CCW)", where CCW means counterclockwise.
So looking at the image in a browser:
http://tonyamoyal.com/stuff/things_that_make_you_go_hmm/photo.JPG
We see the image rotated 90 degrees CCW, and it is 640x480.
I'm really confused about this behavior. When I take a 640x480 still image using AVFoundation, I would expect the default to have no rotated orientation. I expect a 640x480 image oriented exactly as my eye sees the image in the preview layer. Can someone explain why this is happening and how to configure the capture so that when I save my image to the server to later display in a web view, it is not rotated 90 degrees CCW?
This happens because the orientation set in the metadata of the new image is being affected by the orientation of the AV system that creates it. The layout of the actual image data is, of course, different from the orientation mentioned in your metadata. Some image viewing programs respect the metadata orientation, some ignore it.
You can affect the metadata orientation of the AV system by calling:
AVCaptureConnection *videoConnection = ...;
if ([videoConnection isVideoOrientationSupported])
[videoConnection setVideoOrientation:AVCaptureVideoOrientationSomething];
You can affect the metadata orientation of a UIImage by calling:
UIImage *rotatedImage = [[UIImage alloc] initWithCGImage:image.CGImage scale:1.0f orientation:UIImageOrientationSomething];
But the actual data from the AVCapture system will always appear with the wider dimension as X and the narrower dimension as Y, and will appear to be oriented in LandscapeLeft.
If you want the actual data to line up with what your metadata claims, you need to modify the actual data. You can do this by writing the image out to a new image using CGContexts and AffineTransforms. Or there is an easier workaround: use the UIImage+Resize package as discussed here, and resize the image to its current size by calling:
UIImage *rotatedImage = [image resizedImage:CGSizeMake(image.size.width, image.size.height) interpolationQuality:kCGInterpolationDefault];
This will rectify the data's orientation as a side effect.
If you don't want to include the whole UIImage+Resize thing, you can check out its code and strip out the parts where the data is transformed.
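For reference, the stripped-down core of that idea is small. A minimal sketch: redraw the image at its logical size, which bakes the EXIF orientation into the pixel data and resets the metadata orientation to Up:
// drawInRect: honors imageOrientation, so the redrawn copy has its
// pixels laid out the way the metadata claimed.
UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
[image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];
UIImage *normalized = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();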
I am trying to get the pixel colour from an image displayed by the webcam. I want to see how the pixel colour is changing with time.
My current solution sucks up a LOT of CPU. It works and gives me the correct answer, but I am not 100% sure if I am doing this correctly or whether I could cut some steps out.
- (IBAction)addFrame:(id)sender
{
// Get the most recent frame
// This must be done in a @synchronized block because the delegate method that sets the most recent frame is not called on the main thread
CVImageBufferRef imageBuffer;
@synchronized (self) {
imageBuffer = CVBufferRetain(mCurrentImageBuffer);
}
if (imageBuffer) {
// Create an NSImage and add it to the movie
// I think I can remove some steps here, but not sure where.
NSCIImageRep *imageRep = [NSCIImageRep imageRepWithCIImage:[CIImage imageWithCVImageBuffer:imageBuffer]];
NSSize n = {320,160 };
//NSImage *image = [[[NSImage alloc] initWithSize:[imageRep size]] autorelease];
NSImage *image = [[[NSImage alloc] initWithSize:n] autorelease];
[image addRepresentation:imageRep];
CVBufferRelease(imageBuffer);
NSBitmapImageRep* raw_img = [NSBitmapImageRep imageRepWithData:[image TIFFRepresentation]];
NSLog(@"image width is %f", [image size].width);
NSColor* color = [raw_img colorAtX:1279 y:120];
float colourValue = [color greenComponent]+ [color redComponent]+ [color blueComponent];
[graphView setXY:10 andY:200*colourValue/3];
NSLog(@"%0.3f", colourValue);
} // end if (imageBuffer)
}
Any help is appreciated and I am happy to try other ideas.
Thanks guys.
There are a couple of ways that this could be made more efficient. Take a look at the imageFromSampleBuffer: method in this Tech Q&A, which presents a cleaner way of getting from a CVImageBufferRef to an image (the sample uses a UIImage, but it's practically identical for an NSImage).
You can also pull the pixel values straight out of the CVImageBufferRef without any conversion. Once you have the base address of the buffer, you can calculate the offset of any pixel and just read the values from there.
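A sketch of that direct read, assuming the buffer is a CVPixelBuffer in a 32-bit BGRA format (check CVPixelBufferGetPixelFormatType to confirm the actual layout):
CVPixelBufferLockBaseAddress(imageBuffer, 0);
uint8_t *base = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
size_t x = 160, y = 120; // the pixel you want to sample
uint8_t *pixel = base + y * bytesPerRow + x * 4; // 4 bytes per pixel in BGRA
float colourValue = (pixel[0] + pixel[1] + pixel[2]) / (3.0f * 255.0f);
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);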
I use the same big images in a tableView and detailView.
I need the imageView to be filled at 40x40 when an image is shown in the tableView, but it gets stretched across half of the screen. I played with several properties but got no positive result:
[cell.imageView setBounds:CGRectMake(0, 0, 50, 50)];
[cell.imageView setClipsToBounds:NO];
[cell.imageView setFrame:CGRectMake(0, 0, 50, 50)];
[cell.imageView setContentMode:UIViewContentModeScaleAspectFill];
I am using SDK 3.0 with the built-in "Cell Objects in Predefined Styles".
I put Ben's code as an extension in my NS-Extensions file so that I can tell any image to make a thumbnail of itself, as in:
UIImage *bigImage = [UIImage imageNamed:@"yourImage.png"];
UIImage *thumb = [bigImage makeThumbnailOfSize:CGSizeMake(50,50)];
Here is the .h file:
@interface UIImage (PhoenixMaster)
- (UIImage *) makeThumbnailOfSize:(CGSize)size;
@end
and then in the NS-Extensions.m file:
@implementation UIImage (PhoenixMaster)
- (UIImage *) makeThumbnailOfSize:(CGSize)size
{
UIGraphicsBeginImageContextWithOptions(size, NO, UIScreen.mainScreen.scale);
// draw scaled image into thumbnail context
[self drawInRect:CGRectMake(0, 0, size.width, size.height)];
UIImage *newThumbnail = UIGraphicsGetImageFromCurrentImageContext();
// pop the context
UIGraphicsEndImageContext();
if(newThumbnail == nil)
NSLog(@"could not scale image");
return newThumbnail;
}
@end
I cache a thumbnail version since using large images scaled down on the fly uses too much memory.
Here's my thumbnail code:
- (UIImage *)thumbnailOfSize:(CGSize)size {
if( self.previewThumbnail )
return self.previewThumbnail; // returned cached thumbnail
UIGraphicsBeginImageContext(size);
// draw scaled image into thumbnail context
[self.preview drawInRect:CGRectMake(0, 0, size.width, size.height)];
UIImage *newThumbnail = UIGraphicsGetImageFromCurrentImageContext();
// pop the context
UIGraphicsEndImageContext();
if(newThumbnail == nil)
NSLog(@"could not scale image");
self.previewThumbnail = newThumbnail;
return self.previewThumbnail;
}
Just make sure you properly clear the cached thumbnail if you change your original image (self.preview in my case).
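One way to honor that (a sketch, assuming ARC and a standard setter you can override for the original image):
// Invalidate the cached thumbnail whenever the original changes, so
// thumbnailOfSize: regenerates it lazily on the next call.
- (void)setPreview:(UIImage *)preview {
    _preview = preview;
    self.previewThumbnail = nil;
}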
I have mine wrapped in a UIView and use this code:
imageView.contentMode = UIViewContentModeScaleAspectFit;
imageView.autoresizingMask = UIViewAutoresizingFlexibleWidth |UIViewAutoresizingFlexibleHeight;
[self addSubview:imageView];
imageView.frame = self.bounds;
(self is the wrapper UIView, with the dimensions I want - I use AsyncImageView).
I thought Ben Lachman's suggestion of generating thumbnails in advance rather than on the fly was smart, so I adapted his code so it could handle a whole array and to make it more portable (no hard-coded property names).
- (NSArray *)arrayOfThumbnailsOfSize:(CGSize)size fromArray:(NSArray*)original {
NSMutableArray *temp = [NSMutableArray arrayWithCapacity:[original count]];
for(UIImage *image in original){
UIGraphicsBeginImageContext(size);
[image drawInRect:CGRectMake(0,0,size.width,size.height)];
UIImage *thumb = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[temp addObject:thumb];
}
return [NSArray arrayWithArray:temp];
}
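Wiring it up might look like this (self.images and self.thumbnails are hypothetical property names):
// Build all thumbnails once, e.g. in viewDidLoad, instead of scaling
// each image every time a cell is configured.
self.thumbnails = [self arrayOfThumbnailsOfSize:CGSizeMake(40, 40) fromArray:self.images];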
you might be able to use this?
yourTableViewController.rowImage = [UIImage imageNamed:@"yourImage.png"];
and/or
cell.image = yourTableViewController.rowImage;
and if your images are already 40x40 then you shouldn't have to worry about setting bounds and stuff... but I'm also new to this, so I wouldn't know; I haven't played around with table view row/cell images much.
Hope this helps.
I was able to make this work using Interface Builder and a tableviewcell. You can set the "Mode" property for an image view to "Aspect Fit". I'm not sure how to do this programmatically.
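For what it's worth, the programmatic equivalent of that Interface Builder setting should just be:
cell.imageView.contentMode = UIViewContentModeScaleAspectFit;
cell.imageView.clipsToBounds = YES;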
Try setting UIImageView.autoresizesSubviews and/or UIImageView.contentStretch.