Is there any way to check an NSImage's color type in Objective-C? I want to know whether the image is grayscale or RGB. Any suggestions would help, thanks~
If you can get the image's color space (possibly from its underlying CGImage), you can check the number of components in it. If there are fewer than 3, you have a monochrome image.
However, an image could still be monochrome while stored in a color (RGB) color space.
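A minimal sketch of that check, assuming the NSImage can produce a CGImage representation (the function name is just for illustration):

#import <Cocoa/Cocoa.h>

BOOL ImageIsMonochrome(NSImage *image)
{
    // Ask the NSImage for a CGImage so we can inspect its color space.
    CGImageRef cgImage = [image CGImageForProposedRect:NULL context:nil hints:nil];
    if (cgImage == NULL) {
        return NO;
    }

    CGColorSpaceRef colorSpace = CGImageGetColorSpace(cgImage);
    size_t components = CGColorSpaceGetNumberOfComponents(colorSpace);

    // Fewer than 3 components (gray, or gray plus alpha) => monochrome color space.
    return components < 3;
}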
I have set a patterned background on my UIView using:
myView.backgroundColor = [UIColor colorWithPatternImage:[UIImage imageNamed:@"backgroundImage.png"]];
But the image appears to be stretched or scaled up and doesn't appear at the desired resolution. Is there a way to set the size or scale of a background pattern? Or is the image's size used as a default? Does the image DPI have an effect?
The pattern is constructed by tiling the image until it fills the given area.
So there is no control over the tile size other than the original image dimensions.
Now, if you want to provide retina images you should just have an @2x version and iOS will take care of that automatically (by the way, change the method call to [UIImage imageNamed:@"backgroundImage"] - the file extension is optional for PNG images).
Do not provide higher-DPI images for retina; instead, provide an image that is twice the size of the non-retina one (and obviously not by simply upscaling the existing image).
Finally, the only control you seem to have over the pattern (at least the only one that is documented) is the phase. Here is the relevant part from the official documentation:
By default, the phase of the returned color is 0, which causes the top-left corner of the image to be aligned with the drawing origin. To change the phase, make the color the current color and then use the CGContextSetPatternPhase function to change the phase.
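A minimal sketch of that, assuming the drawing happens in a UIView subclass's drawRect: where a current graphics context is available:

- (void)drawRect:(CGRect)rect
{
    UIColor *pattern = [UIColor colorWithPatternImage:[UIImage imageNamed:@"backgroundImage"]];

    // Shift the tiling so it starts 20pt right and 10pt down from the view's origin.
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetPatternPhase(context, CGSizeMake(20.0, 10.0));

    [pattern setFill];
    UIRectFill(rect);
}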
Turns out I wasn't using the @2x naming convention so images were appearing stretched. I added it in and it fixed everything.
What would be the most efficient way to remap the colors of an image to a gradient on iOS? This is described as "apply a color lookup table to the image" in the ImageMagick docs, and I think that is the general term for it. Is there something built into Core Image, for instance, to do this? I know it can be done with ImageMagick using convert -clut, but I'm not certain that is the most efficient way to do it.
The result of remapping the image to a gradient is as pictured here:
http://owolf.net/uploads/ny.jpg
The basic formula, copied from fraxel's comment is:
1. Open your image as grayscale, and as RGB.
2. Convert the RGB image to HSV (Hue, Saturation, Value/Brightness) color space. This is a cylindrical space, with hue represented by a single value on the polar axis.
3. Set the hue channel to the grayscale image we already opened; this is the crucial step.
4. Set the value and saturation channels both to their maximal values.
5. Convert back to RGB space (otherwise the display will be incorrect).
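As for a built-in route, Core Image does ship a lookup-table filter, CIColorMap, which maps source colors through a gradient image. A hedged sketch (this is not the HSV recipe above; it assumes iOS 6 or later and that the gradient is supplied as a small image, e.g. a 256x1 strip, and the function name is just for illustration):

#import <UIKit/UIKit.h>
#import <CoreImage/CoreImage.h>

UIImage *RemapImageToGradient(UIImage *source, UIImage *gradient)
{
    CIImage *inputImage = [CIImage imageWithCGImage:source.CGImage];
    CIImage *gradientImage = [CIImage imageWithCGImage:gradient.CGImage];

    // CIColorMap uses the gradient image as a color lookup table.
    CIFilter *filter = [CIFilter filterWithName:@"CIColorMap"];
    [filter setValue:inputImage forKey:kCIInputImageKey];
    [filter setValue:gradientImage forKey:@"inputGradientImage"];

    CIContext *context = [CIContext contextWithOptions:nil];
    CIImage *outputImage = [filter outputImage];
    CGImageRef cgImage = [context createCGImage:outputImage fromRect:[outputImage extent]];
    UIImage *result = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    return result;
}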
I have a rectangular NSImage A and I want to scale it to embed it into a square transparent image B, keeping A's aspect ratio. So, in the end I'll get a square image with the rectangle in it.
How can I compose that image? I mean, how can I draw an NSImage over another NSImage and save the resulting image?
I've been reading about clipping an NSImage inside a bezier path, but I need to keep the ratio instead of filling the bezier's square.
I hope you understand what I want.
Thanks.
The 'Cocoa Drawing Guide' has a section called 'Drawing to an Image'. From that documentation:
It is possible to create images programmatically by locking focus on an NSImage object and drawing other images or paths into the image context. This technique is most useful for creating images that you intend to render to the screen, although you can also save the resulting image data to a file.
There is example code there.
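For the square-image case in the question, a minimal sketch of that lock-focus technique might look like this (the function name and side-length parameter are just for illustration):

#import <Cocoa/Cocoa.h>

NSImage *SquareImageFromImage(NSImage *imageA, CGFloat side)
{
    NSImage *imageB = [[NSImage alloc] initWithSize:NSMakeSize(side, side)];

    // Scale A to fit inside the square while keeping its aspect ratio,
    // then center it; the uncovered area stays transparent.
    CGFloat scale = MIN(side / imageA.size.width, side / imageA.size.height);
    NSSize scaledSize = NSMakeSize(imageA.size.width * scale, imageA.size.height * scale);
    NSRect targetRect = NSMakeRect((side - scaledSize.width) / 2.0,
                                   (side - scaledSize.height) / 2.0,
                                   scaledSize.width, scaledSize.height);

    [imageB lockFocus];
    [imageA drawInRect:targetRect
              fromRect:NSZeroRect
             operation:NSCompositeSourceOver
              fraction:1.0];
    [imageB unlockFocus];
    return imageB;
}

Saving is then a matter of asking imageB for a bitmap representation (an NSBitmapImageRep, for example) and writing its data to a file.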
I am writing a Cocoa application for Mac OS X. I'm trying to figure out how to determine the size of an image that will be captured by a camera. I would like to know the size of the captured image so I can set up a view with an aspect ratio that won't distort the image. For example, if my view is defined to be 640x360 and my camera captures images that are 640x480, the displayed image looks short and fat. I'm also displaying some other layers over the image and I need the image size to be able to scale and position the layers properly.
I won't know the type of camera that is attached until run-time, so I'd like to be able to interrogate the device and get attributes like image size. Thanks for the help...
You are altering the aspect ratio of the image when you capture at 640x360 instead of 640x480 or 320x240. You are doing something similar to a resize: using the whole image and making it a different size.
If you don't want to distort the image but want to use only a portion of it, you need to do a crop. Some hardware supports cropping; other hardware doesn't, and you have to do it in software. Cropping means using only a portion of the original image. In your case, you would discard the bottom 120 lines.
Example (from here):
The blue rectangle is the natural, or original, image and the red is a crop of it.
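A minimal sketch of that crop in software, assuming you already have the captured frame as a CGImage (the function name is just for illustration):

#import <CoreGraphics/CoreGraphics.h>

CGImageRef CreateCroppedImage(CGImageRef source)
{
    size_t width = CGImageGetWidth(source);    // e.g. 640
    size_t height = CGImageGetHeight(source);  // e.g. 480

    // Keep the top rows and discard the bottom 120 lines (640x480 -> 640x360).
    CGRect cropRect = CGRectMake(0, 0, width, height - 120);

    // The caller is responsible for releasing the returned image.
    return CGImageCreateWithImageInRect(source, cropRect);
}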
I have a CGImageRef of a shape with a transparent background. Is it possible to stroke the image, like you can stroke a path with CGContextStrokePath()? Alternatively, can you convert the image to a path and stroke that?
You can use CGContextDrawTiledImage. It's not exactly what you are looking for, but as far as I know there is no built-in way to do what you want. You can use a combination of well-placed rects and clipping paths to achieve the look you are going for, however.
It turns out there's no easy way to do this, so I ended up having to iterate over each pixel (you can get a pointer to the image data using CGBitmapContextGetData()) and adjusting the RGB value of every one that was adjacent to an opaque pixel.
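A minimal sketch of the pixel-access part of that approach: draw the image into a bitmap context and get a raw pointer to its pixels with CGBitmapContextGetData(). The outlining logic itself (finding pixels adjacent to opaque ones and recoloring them) is omitted.

#import <CoreGraphics/CoreGraphics.h>

void ProcessImagePixels(CGImageRef image)
{
    size_t width = CGImageGetWidth(image);
    size_t height = CGImageGetHeight(image);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(NULL, width, height,
                                                 8, width * 4, colorSpace,
                                                 kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);

    CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);

    // Each pixel is 4 bytes: R, G, B, A (premultiplied alpha).
    uint8_t *pixels = CGBitmapContextGetData(context);
    for (size_t y = 0; y < height; y++) {
        for (size_t x = 0; x < width; x++) {
            uint8_t *pixel = pixels + (y * width + x) * 4;
            if (pixel[3] == 0) {
                // Transparent pixel: check its neighbours for opacity and,
                // if any are opaque, write the stroke color into this pixel here.
            }
        }
    }

    CGContextRelease(context);
}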