drawInRect loses resolution when drawn to smaller image? - objective-c

When I draw a large image (say 1000x1000 pixels) using the drawInRect: method into a smaller rect (say 200x200 pixels), and then use drawInRect: again to draw that result back at its original size (1000x1000 pixels), is the resolution affected? Does the resolution decrease by drawing a large image into a small one and then drawing that same image large again?

Hopefully I've gotten your question correct in my head.
If you take an image bigger than 200x200 pixels and draw it into a 200x200 pixel rectangle, it'll get scaled down and lose most of its detail. If you then take the resultant image and draw it into a bigger rectangle, it'll just get scaled back up. So, to answer your question: yes, it'll look blurry as hell. It's no different from resizing an image down in a graphics editor and then blowing it back up to its original size. The loss of detail is permanent; there's no way to recover what was thrown away on the way down.
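To make that concrete, here's a minimal sketch of the round trip described above (UIKit; the helper name and the 1.0 scale are my own choices, not from the question):

#import <UIKit/UIKit.h>

// Draws an image into a rect of the given pixel size and returns the result.
UIImage *ScaleImageToSize(UIImage *image, CGSize size) {
    UIGraphicsBeginImageContextWithOptions(size, NO, 1.0); // 1.0 = size is in pixels
    [image drawInRect:CGRectMake(0, 0, size.width, size.height)];
    UIImage *scaled = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return scaled;
}

// Usage: scale 1000x1000 down to 200x200, then back up to 1000x1000.
// The second result is permanently blurry; the detail discarded in the
// first pass cannot be recovered.
// UIImage *small    = ScaleImageToSize(original, CGSizeMake(200, 200));
// UIImage *bigAgain = ScaleImageToSize(small, CGSizeMake(1000, 1000));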

Related

When I resize a big TIFF image to less than 25% (say 17%), the pixels are wrapped from left to right

I'm using ImageMagick v7.16.0. I have a very big image of size 440*1700, and I want it resized to 17% of its original size.
I tried the Resize, Scale and Rescale methods with the intended width/height parameters, and also with MagicGeometry.
But the resized image wraps the pixels from the left side to the right side. Click here for the image.
Can anyone help me understand why this wrapping happens?
The wrapping appears only when I resize to less than 25%. [When I resize to 50% or 80% I don't see this wrapping.]

Drawing Retina resolution on CGDisplayGetDrawingContext

I have a situation where I'm trying to draw an image into a display's CGContext retrieved using CGDisplayGetDrawingContext. Despite having the image at the correct high resolution, using CGContextDrawImage to draw the image onto the context results in a pixelated image. I've also tried scaling the image down in a bitmap context (using CGBitmapContextCreate) and then drawing that onto the display's context, but that also results in a pixelated image (I thought it might retain the DPI; it was a long shot). Any idea how to fix this?
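For reference, a minimal sketch of the setup described above (the display capture and the full-screen destination rect are assumptions added to make it self-contained; this just reproduces the drawing path, it is not a fix):

#import <ApplicationServices/ApplicationServices.h>

// Draws a CGImage into a captured display's drawing context.
void DrawImageOnDisplay(CGImageRef image) {
    CGDirectDisplayID display = CGMainDisplayID();
    if (CGDisplayCapture(display) != kCGErrorSuccess) return;

    CGContextRef ctx = CGDisplayGetDrawingContext(display);
    if (ctx) {
        CGRect dest = CGRectMake(0, 0,
                                 CGDisplayPixelsWide(display),
                                 CGDisplayPixelsHigh(display));
        CGContextDrawImage(ctx, dest, image);
        CGContextFlush(ctx);
    }
    CGDisplayRelease(display);
}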

iOS objective C converting coordinates from Absolute image location to my views coordinate system

I am using CIDetector to find faces in a picture.
The coordinates of the faces it returns are absolute coordinates in the image file (the image dimensions are much larger than the screen size, obviously).
I tried to use the convertRect:toView: method. The image itself is not a UIView, so the method doesn't work, and I also have a few views embedded inside each other, with the image finally being shown in the innermost one.
I want to convert the bounds of the found faces in the image to the exact location of the face being shown on the screen in the embedded image.
How can this be accomplished?
Thanks!
The image shown on the phone is scaled to fit the screen with aspect fit.
The coordinates from CIDetector (Core Image) are flipped relative to UIKit coordinates. There are a bunch of tutorials out there on iOS face detection, but most of them are either incomplete or mess up the coordinates. Here's one that is correct: http://nacho4d-nacho4d.blogspot.com/2012/03/coreimage-and-uikit-coordinates.html
One thing to note: the tutorial uses a small image, so the resulting coordinates do not have to be scaled to the on-screen (UIImageView) representation of the image. Assuming you use a photo taken with the iPad camera, you will have to scale the coordinates by the amount the source image is scaled (unless you reduce its size before running the face detection routine; maybe not a bad idea). You may also need to rotate the image to the correct orientation.
There is a routine in one of the answers here for rotating/scaling: UIImagePickerController camera preview is portrait in landscape app
And this answer has a good routine for finding the scale of an image when presented by a UIImageView using 'aspect fit': How to get the size of a scaled UIImage in UIImageView?
You will need to use the scale in order to map the CIDetector coordinates from the full size image to the scaled down image shown in a UIImageView.
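Putting those pieces together, a minimal sketch of the mapping for the aspect-fit case (the helper name is made up, and it ignores image orientation):

#import <UIKit/UIKit.h>

// Maps a CIDetector face rect (image pixel coordinates, origin at bottom-left)
// into the coordinate space of a UIImageView using UIViewContentModeScaleAspectFit.
CGRect FaceRectInImageView(CGRect faceRect, CGSize imageSize, UIImageView *imageView) {
    // Flip vertically: Core Image's origin is bottom-left, UIKit's is top-left.
    faceRect.origin.y = imageSize.height - CGRectGetMaxY(faceRect);

    // Aspect-fit scale plus the letterbox offsets.
    CGSize viewSize = imageView.bounds.size;
    CGFloat scale = MIN(viewSize.width / imageSize.width,
                        viewSize.height / imageSize.height);
    CGFloat offsetX = (viewSize.width  - imageSize.width  * scale) / 2.0;
    CGFloat offsetY = (viewSize.height - imageSize.height * scale) / 2.0;

    CGRect viewRect = CGRectMake(faceRect.origin.x * scale + offsetX,
                                 faceRect.origin.y * scale + offsetY,
                                 faceRect.size.width * scale,
                                 faceRect.size.height * scale);

    // If the image view is nested inside other views, convert afterwards, e.g.:
    // viewRect = [imageView convertRect:viewRect toView:outerView];
    return viewRect;
}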

Scaling an image from the iPhone - iOS App

I let the user select a photo from the iPhone library, and I grab the UIImage.
I output the size of the image, and it says 320x480, but it doesn't seem to be, because when I draw the image on the screen using CGRectMake(0,0,320,480), it only shows the upper left portion of the image. Aren't the images much bigger than 320x480 because of the high resolution?
I'd like to scale the image to force it to be 320x480. If it is less than 320x480, it should not be rescaled at all. If the width is greater than 320 or the height is greater than 480, it should scale so that it gets as close to 320x480 as possible while keeping the proper width-to-height proportion. So, for instance, if it scales to 320x420, that is fine, or 280x480.
How can I do this in Objective-C?
Setting the image view's content mode like this:
myView.contentMode = UIViewContentModeScaleAspectFit;
will preserve the aspect ratio.
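If you need to actually resize the UIImage rather than just let the image view scale it for display, here's a minimal sketch along the lines described in the question (the function name is made up):

#import <UIKit/UIKit.h>

// Scales an image down to fit within maxSize while keeping its aspect ratio;
// images already within the limit are returned unchanged (no upscaling).
UIImage *ImageScaledToFit(UIImage *image, CGSize maxSize) {
    CGSize size = image.size;
    if (size.width <= maxSize.width && size.height <= maxSize.height) {
        return image;
    }
    CGFloat scale = MIN(maxSize.width / size.width, maxSize.height / size.height);
    CGSize target = CGSizeMake(floor(size.width * scale), floor(size.height * scale));

    UIGraphicsBeginImageContextWithOptions(target, NO, 1.0);
    [image drawInRect:CGRectMake(0, 0, target.width, target.height)];
    UIImage *scaled = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return scaled;
}

// Usage:
// UIImage *fitted = ImageScaledToFit(pickedImage, CGSizeMake(320, 480));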

Replace all white/nearly white pixels in a UIImage with alpha using CGImage?

I have a UIImage with a white background. I would like to replace the white background/pixels with alpha-transparent pixels. I've looked at other questions on Stack Overflow, along with the Quartz documentation, but have yet to find a coherent "start-to-end" solution for this problem. How is this done?
CGImageCreateWithMaskingColors
A UIImage wraps a CGImage. Take the CGImage, run it through CGImageCreateWithMaskingColors, then create a new UIImage from the resulting CGImage.
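A minimal sketch of that approach (the 230..255 per-channel range is an assumed threshold for "nearly white"; note that in my experience the masking call only takes effect on source images that don't already have an alpha channel):

#import <UIKit/UIKit.h>

// Masks out white / nearly-white pixels via CGImageCreateWithMaskingColors.
UIImage *ImageByMaskingWhite(UIImage *image) {
    // {min, max} per channel: R, G, B. Pixels whose channels all fall inside
    // these ranges are masked out (drawn as transparent).
    const CGFloat maskingColors[6] = {230, 255, 230, 255, 230, 255};
    CGImageRef masked = CGImageCreateWithMaskingColors(image.CGImage, maskingColors);
    if (!masked) return image;
    UIImage *result = [UIImage imageWithCGImage:masked
                                          scale:image.scale
                                    orientation:image.imageOrientation];
    CGImageRelease(masked);
    return result;
}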
The first step is to define some sort of "distance" function that determines how far a pixel is from being white. Then you need to define a distance threshold below which a pixel is considered white. Then you would iterate over the pixels of the image, changing any pixel that counts as white according to your distance function and threshold to be transparent. The main trick, though, is making this efficient... touching pixels through functions will be very slow; your best bet is to touch the pixels directly by gaining access to the memory buffer in which the pixels reside and stepping through them.
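A sketch of that direct-buffer approach (it uses a simple per-channel threshold as the "distance" test; the function name and the RGBA layout are my own assumptions):

#import <UIKit/UIKit.h>

// Renders the image into an RGBA bitmap context, walks the pixel memory,
// and zeroes out any pixel whose channels are all at or above the threshold.
UIImage *ImageWithWhiteMadeTransparent(UIImage *image, uint8_t threshold) {
    CGImageRef cgImage = image.CGImage;
    size_t width  = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(NULL, width, height, 8, width * 4,
                                             colorSpace,
                                             kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    if (!ctx) return image;

    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cgImage);
    uint8_t *pixels = (uint8_t *)CGBitmapContextGetData(ctx);

    for (size_t i = 0; i < width * height; i++) {
        uint8_t *p = pixels + i * 4; // p[0]=R, p[1]=G, p[2]=B, p[3]=A
        if (p[0] >= threshold && p[1] >= threshold && p[2] >= threshold) {
            p[0] = p[1] = p[2] = p[3] = 0; // fully transparent
        }
    }

    CGImageRef resultRef = CGBitmapContextCreateImage(ctx);
    UIImage *result = [UIImage imageWithCGImage:resultRef
                                          scale:image.scale
                                    orientation:image.imageOrientation];
    CGImageRelease(resultRef);
    CGContextRelease(ctx);
    return result;
}

// Usage:
// UIImage *transparent = ImageWithWhiteMadeTransparent(original, 240);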