NSImage from two NSImages - objective-c

I have a rectangular NSImage, A, and I want to scale it to embed it into a square transparent image, B, while keeping A's aspect ratio. So, in the end I'll get a square image with the rectangle inside it.
How can I compose that image? I mean, how can I draw an NSImage over another NSImage and save the resulting image?
I've been reading about clipping an NSImage inside a Bezier path, but I need to keep the aspect ratio instead of filling the square.
I hope you understand what I want.
Thanks.

The 'Cocoa Drawing Guide' has a section called 'Drawing to an Image'. From that documentation:
It is possible to create images programmatically by locking focus on an NSImage object and drawing other images or paths into the image context. This technique is most useful for creating images that you intend to render to the screen, although you can also save the resulting image data to a file.
There is example code there.
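Following that technique, a minimal sketch (the image names and the output path are illustrative; the drawing uses the lockFocus approach the guide describes):
// Draw rectangular image A centered inside a square, transparent image B,
// preserving A's aspect ratio.
NSImage *a = [NSImage imageNamed:@"A"]; // the rectangular source image
CGFloat side = MAX(a.size.width, a.size.height);
NSImage *b = [[NSImage alloc] initWithSize:NSMakeSize(side, side)];
[b lockFocus];
NSRect target = NSMakeRect((side - a.size.width) / 2.0,
                           (side - a.size.height) / 2.0,
                           a.size.width, a.size.height);
[a drawInRect:target
     fromRect:NSZeroRect
    operation:NSCompositeSourceOver
     fraction:1.0];
[b unlockFocus];
// To save the result, convert it to PNG data and write it out:
NSData *tiff = [b TIFFRepresentation];
NSBitmapImageRep *rep = [NSBitmapImageRep imageRepWithData:tiff];
NSData *png = [rep representationUsingType:NSPNGFileType properties:nil];
[png writeToFile:@"/tmp/composed.png" atomically:YES];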

Related

Saving what is currently drawn into a view as an image

I am creating a drawing app and have run into a problem. I have an array of curves; each curve keeps an array of points, and each point keeps its color, thickness, and coords.
When drawRect: is called, I redraw all the curves from this array. The problem is that this array is getting huge, and the app is slowing down.
My idea is to, at the end of each redrawing, save the current context as an image, free the curves array, and at the next redraw, use that image as the background. Ultimately, I don't need the curves array at all, just an array of the curves in progress. Is this possible? Or maybe there is another way to do it?
You can render your view's layer into an image and use that image on the next iteration. It is even easier if yourViewToSaveAsImage is a UIImageView, since you can assign the rendered result straight back to it:
UIView *view = yourViewToSaveAsImage;
// Render the view's layer into an offscreen bitmap context.
// (Use UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, 0.0)
// instead if you need the Retina scale factor respected.)
UIGraphicsBeginImageContext(view.bounds.size);
[view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
A path contains only information on points, so if you want to track variations in rendering you need a separate list of objects.
I achieved this by creating an NSArray* of my own custom objects that included fields such as: an NSBezierPath* (to capture the points and simplify drawing the segment), a CGPathDrawingMode to use for the segment, and information on the color and line size.
Then when I draw, I iterate over the elements of the array, set the context's current colors, and call either stroke or fill on the current element's NSBezierPath* depending on how I configured that segment.
I would also like to know if there's a faster way, but this approach certainly works well.
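For illustration, a sketch of such a segment object and drawing loop (the names are hypothetical, not from the answer above):
// Hypothetical record pairing a path with its rendering state.
// The NSBezierPath itself carries the points and the line width.
@interface PathSegment : NSObject
@property (strong) NSBezierPath *path;
@property (assign) CGPathDrawingMode drawingMode;
@property (strong) NSColor *color;
@end

// In drawRect:, replay each segment with its own color and mode.
for (PathSegment *segment in self.segments) {
    [segment.color set];
    if (segment.drawingMode == kCGPathFill) {
        [segment.path fill];
    } else {
        [segment.path stroke];
    }
}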

Images in NSButton and NSImageView Blurred

I am completely stumped here; I have a series of small images I'm tinkering with and making into buttons. As you can see, they are all decently crisp and sharp, and remain so when I open the PNG files in Preview and the like.
However, when I use them in NSButtons and NSImageViews in Interface Builder, with Scaling set to None, the images become horribly blurred. What am I doing wrong? I don't know where to start or what to try; should I go back to the icons and try to make them pixel perfect? Does it have to do with anti-aliasing or something along those lines?
EDIT:
For some reason, it seems as if the NSButtons and NSImageViews are loading the high-resolution versions of the images even though I'm on a standard display, which can be identified by a slight light-blue stroke I added to them. Oddly, Quartz Debug does not identify these as high-resolution images and there's no red tint. Removing references to the @2x images does fix the problem... but...
If you check out session 245 from the WWDC 2012 videos, 'Advanced Tips and Tricks for High Resolution on OS X', you'll find out why in the first section, which covers NSImage.
NSImage doesn't have any concept of high resolution; it just uses the smallest representation that has more pixels than the space it has to fill. So if your NSImageView is bigger in dimension than your 1x image, it will use the @2x image, as it has more pixels.
I had this problem before. It seems that if your image's DPI isn't 72, the reported image size will be wrong. You can get the real size using the code below.
NSImage *image = [NSImage imageNamed:@"image"];
// Read the true pixel dimensions from a bitmap rep, ignoring the DPI metadata
NSBitmapImageRep *rep = [NSBitmapImageRep imageRepWithData:[image TIFFRepresentation]];
NSSize size = NSMakeSize([rep pixelsWide], [rep pixelsHigh]);
[image setSize:size];
When specifying image names in Interface Builder and [NSImage imageNamed:], make sure to use foo instead of foo.png. While iOS is smart enough to add the @2x in the latter case, Mac OS X is not: it will load the non-Retina image when given foo.png, but will add the @2x when given foo (if such an image is present).
Are you assigning the images to your buttons in IB or in code?
If you are doing it in code, creating a copy of the image (e.g. [myImage copy]) and assigning that copy to your button may solve this.
In my case (drawing icons in a custom NSOutlineView), I had to make sure that the x,y origin of the drawRect was rounded to integer values:
NSMakeRect(round(NSMinX(cellFrame) - iconSize.width),
           round(NSMidY(cellFrame) - (iconSize.height / 2.0f)), …);
This is actually a response to the earlier post about DPI, but I was unable to reply directly to it. The code in that post gave the true pixel dimensions for me (that is, it did not indicate any trouble). However, image DPI was definitely the culprit in my case. The symptoms I was seeing were:
With my NSImageViews set to No Scaling, the images would appear squashed.
With my NSImageViews set to Axes Independently, most images would appear correctly if the dimensions of the NSImageViews were set to exactly match the dimensions of the image.
Even then, some images had strange artifacts in them that were not there when viewing the same image via Preview or elsewhere (or even via Interface Builder, for that matter); they only appeared at runtime.
The images that had trouble were at a DPI other than 72. When I re-created the images at 72 DPI, all of the above behavior disappeared.
This was a pretty confounding issue; I hope this helps someone!
For me, I just needed to set image scaling to none:
In Interface Builder
In code
NSImageCell *imageCell = [yourImageView cell]; // your image view's cell
[imageCell setImageScaling:NSImageScaleNone];
NSButtonCell *buttonCell = [yourButton cell];  // your button's cell
[buttonCell setImageScaling:NSImageScaleNone];

Create an irregular shaped frame

I've created a canvas within which I display an image that is clipped when it goes over the edges. I can do this fine with a square-shaped frame; however, the frame I want to use is the one below. Is there any way I can clip the image inside the frame without having to add a non-transparent square border around the image, i.e. just using the black line that I've already drawn? (On iPad.)
You'll need to use Core Graphics and Quartz to handle this sort of clipping/graphics manipulation.
http://developer.apple.com/library/ios/#documentation/GraphicsImaging/Conceptual/drawingwithquartz2d/Introduction/Introduction.html#//apple_ref/doc/uid/TP30001066
If you're using UIBezierPath, you may be able to achieve the clipping you're after with the following process (sketched in code after the steps):
http://developer.apple.com/library/ios/#documentation/GraphicsImaging/Conceptual/drawingwithquartz2d/dq_paths/dq_paths.html#//apple_ref/doc/uid/TP30001066-CH211-TPXREF101
1. Convert your UIBezierPath to a CGPath.
2. Get your image into a CGContext.
3. Add your CGPath to the context via CGContextAddPath.
4. Clip your context using CGContextClip.
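A sketch of those steps, assuming an existing UIBezierPath named clipPath and a UIImage named image (both names illustrative):
// Render the image clipped to the path into a new image context.
UIGraphicsBeginImageContextWithOptions(image.size, NO, 0.0);
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextAddPath(context, clipPath.CGPath); // steps 1 and 3
CGContextClip(context);                     // step 4
[image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)]; // step 2
UIImage *clipped = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();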
Alternatively, if you don't want to be messing with paths (and depending on whether this technique is suitable for your situation, your description of the issue makes it hard to tell), it might be worth using image masking to achieve the effect you're after. See the first link and look under "Bitmap Images and Image Masks".

How to add a shadow to a UIImageView which fits the shape of the image content, but with some rotation and shift effect

I have been looking for a solution on the web for a long time. Most tutorials are fairly simple: they add a shadow to a UIView. I also noticed that if we add a shadow to a UIImageView, the shadow's shape can perfectly fit the shape of the content image, provided the image itself has an alpha channel. For example, if the image is an animal with a transparent background, the shadow has that same animal shape (not a rectangular shadow matching the UIImageView's frame).
But that is not enough. What I need is to transform the shadow so it has some rotation angle and a compressed (squeezed or shifted) effect, so that it looks like the sunlight is coming from a certain spot.
To demonstrate what I need, I uploaded two images below, captured from the Google Maps app created by Apple. You can imagine the annotation pin is an image with a pin shape, so the shadow is also pin-shaped; but it is not simply offset by a CGSize: you can see the top of the shadow is shifted right by about 35 degrees and the height is slightly squeezed.
When we tap and hold the pin, the shadow is also animated away from the pin, so I believe such a shadow can be produced programmatically.
The best shadow tutorial I have found so far is http://nachbaur.com/blog/fun-shadow-effects-using-custom-calayer-shadowpaths but unfortunately it cannot produce this effect.
If anyone knows the answer or knows better terms to search for, please let me know. Thank you.
(Please note that the shape of the image is dynamic in the app, so pre-rendering the shadow with a tool like Photoshop is not an option.)
In order to create dynamic effects like this, you have to use Core Graphics. It's incredibly powerful once you know how to use it. Basically you need to set a skew transform on the context, set up a shadow and draw the image. You will probably have to use transparency layers as well.
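For illustration only, a rough sketch of that approach, assuming a UIImage named image with a transparent background (all names and numbers are illustrative, not Apple's actual pin code):
CGSize canvasSize = CGSizeMake(image.size.width * 2, image.size.height * 2); // illustrative
CGPoint origin = CGPointMake(image.size.width / 2, image.size.height / 2);   // illustrative
UIGraphicsBeginImageContextWithOptions(canvasSize, NO, 0.0);
CGContextRef ctx = UIGraphicsGetCurrentContext();
// Shadow pass: shear and squash the context so the silhouette leans right
// and is flattened, then stamp the image's alpha in translucent black.
CGContextSaveGState(ctx);
CGContextTranslateCTM(ctx, origin.x, origin.y + image.size.height);
CGContextConcatCTM(ctx, CGAffineTransformMake(1, 0, -tan(35.0 * M_PI / 180.0), 0.6, 0, 0));
CGContextTranslateCTM(ctx, 0, -image.size.height);
CGContextSetAlpha(ctx, 0.4);
CGContextBeginTransparencyLayer(ctx, NULL);
[image drawAtPoint:CGPointZero];
CGContextSetBlendMode(ctx, kCGBlendModeSourceIn); // tint the silhouette black
CGContextSetFillColorWithColor(ctx, [UIColor blackColor].CGColor);
CGContextFillRect(ctx, (CGRect){CGPointZero, image.size});
CGContextEndTransparencyLayer(ctx);
CGContextRestoreGState(ctx);
// Image pass: draw the unskewed image on top of its shadow.
[image drawAtPoint:origin];
UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();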
It doesn't sound like you can use CALayer shadows, since that is meant to solve a specific use-case. The approach Apple takes with the pin marks on the map is to have two separate images that are created ahead of time (e.g. in Photoshop) and they position them within the map relative to a reference point.
If you really do need to do this at run time, it should still be possible using either Core Graphics or Core Image. To get a blurred shadow appearance, you can use one of the blur CIFilters (the kCICategoryBlur category). You can then convert the image to grayscale, and to get that compressed look you just need to resize and skew the image.
Once you have two separate images, you can either take the CGImageRef for the shadow image and set it as the contents of another sublayer, or you can add it as a separate view.
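For example, assigning a pre-rendered shadow image to a sublayer might look like this (shadowImage and pinView are hypothetical names):
CALayer *shadowLayer = [CALayer layer];
shadowLayer.contents = (__bridge id)shadowImage.CGImage;
shadowLayer.frame = CGRectMake(0, 0, shadowImage.size.width, shadowImage.size.height);
// Insert below the pin's layer so the pin draws on top of its shadow.
[pinView.layer.superlayer insertSublayer:shadowLayer below:pinView.layer];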
If you know what all the shapes are, you could just render a shadow image in Photoshop or something.

Replace all white/nearly white pixels in a UIImage with alpha using CGImage?

I have a UIImage with white background. I would like replace the white background/pixels with alpha-transparent pixels. I've looked at other questions on StackOverflow, along with Quartz documentation, but have yet to find a coherent "start-to-end" for this problem. How is this done?
CGImageCreateWithMaskingColors
A UIImage wraps a CGImage. Take the CGImage, run it through CGImageCreateWithMaskingColors, then either create a new UIImage from the result or assign the result back to the UIImage.
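A minimal sketch, assuming a UIImage named source (the 230-255 ranges are an illustrative "nearly white" band; note that CGImageCreateWithMaskingColors requires a source image without an alpha channel):
const CGFloat maskingRange[6] = {230, 255, 230, 255, 230, 255}; // R, G, B min/max
CGImageRef masked = CGImageCreateWithMaskingColors(source.CGImage, maskingRange);
UIImage *result = [UIImage imageWithCGImage:masked];
CGImageRelease(masked);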
The first step is to define some sort of "distance" function that determines how far a pixel is from being white, along with a distance threshold below which a pixel is considered white. Then iterate over the pixels of the image, changing any pixel that is considered white, according to your distance function and threshold, to be transparent. The main trick is making this efficient: touching pixels through accessor functions will be very slow. Your best bet is to touch the pixels directly by gaining access to the memory buffer in which they reside and stepping through them.
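A rough sketch of that buffer-walking approach, assuming a UIImage named source and using a simple per-channel threshold as the "distance" test (the threshold value is illustrative):
// Draw the image into our own RGBA bitmap so we control the pixel layout.
CGImageRef cgImage = source.CGImage;
size_t width = CGImageGetWidth(cgImage);
size_t height = CGImageGetHeight(cgImage);
uint8_t *pixels = calloc(width * height * 4, 1);
CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(pixels, width, height, 8, width * 4,
                                         rgb, kCGImageAlphaPremultipliedLast);
CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), cgImage);

const uint8_t threshold = 240; // every channel above this counts as "white"
for (size_t i = 0; i < width * height * 4; i += 4) {
    if (pixels[i] > threshold && pixels[i+1] > threshold && pixels[i+2] > threshold) {
        // Zero all four components: transparent, and still valid premultiplied alpha.
        pixels[i] = pixels[i+1] = pixels[i+2] = pixels[i+3] = 0;
    }
}

CGImageRef maskedImage = CGBitmapContextCreateImage(ctx);
UIImage *result = [UIImage imageWithCGImage:maskedImage];
CGImageRelease(maskedImage);
CGContextRelease(ctx);
CGColorSpaceRelease(rgb);
free(pixels);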