Change a color in a UIImage - objective-c

I would like to know how I can change just one color in an image.
For example: if the color of this pixel is "red", change it to "blue".

The technical approach is straightforward:
1. Get all pixel values (explained here)
2. Look for the pixel values you don't like and change them
3. Draw the image using the changed pixel values (explained here)
Keep in mind that if you combine the three steps into one method, changing the raw pixel values in place without creating UIColor objects and drawing the changed image immediately afterwards, you'll get much better performance.
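A minimal sketch of those steps combined into one pass, assuming a 32-bit RGBA bitmap; the "is it red?" tolerance test and the method name are illustrative, not part of the original answer:

// Replaces "red enough" pixels with blue by editing the raw bitmap directly.
- (UIImage *)imageByReplacingRedWithBlue:(UIImage *)image {
    CGImageRef cgImage = image.CGImage;
    size_t width  = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);

    // Step 1: get all pixel values by drawing the image into a bitmap context.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    uint8_t *pixels = calloc(width * height * 4, sizeof(uint8_t));
    CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8, width * 4,
                                                 colorSpace,
                                                 kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), cgImage);

    // Step 2: look for the pixel values you don't like and change them.
    for (size_t i = 0; i < width * height; i++) {
        uint8_t *p = pixels + i * 4;                 // p[0]=R, p[1]=G, p[2]=B, p[3]=A
        if (p[0] > 200 && p[1] < 50 && p[2] < 50) {  // crude "is it red?" test
            p[0] = 0;
            p[2] = 255;                              // make it blue
        }
    }

    // Step 3: draw the image using the changed pixel values.
    CGImageRef newCGImage = CGBitmapContextCreateImage(context);
    UIImage *result = [UIImage imageWithCGImage:newCGImage
                                          scale:image.scale
                                    orientation:image.imageOrientation];

    CGImageRelease(newCGImage);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    free(pixels);
    return result;
}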

Related

CreateJS: how to change color of bitmap image

I'm using CreateJS to build a simple game.
I have an image (white fill and black stroke) and I'd like to change the black color to another one.
Is it possible?
Thanks
The three ways to do color adjustments in EaselJS are:
Composite Operations: You can draw an image using a composite operation (such as "destination-in") to determine how pixels are laid down. This is probably not going to give you the result you want. Here is an example of a black PNG being changed to different colors using compositeOperation.
Color Filters. EaselJS has both a ColorFilter and a ColorMatrixFilter, which assist with modifying colors. The first uses parameters to multiply and add to the color and alpha channels, but is a little harder to use. The second uses a ColorMatrix to adjust hue, saturation, contrast, and brightness. This may not work for you, since changing the black pixels is kind of the opposite of what color filters do.
A Custom Filter. EaselJS supports custom filters (such as the Threshold filter in the extras folder). This is probably your best option, and might take some massaging to get what you need.
Hope that sheds some light.

Recreate Apple Watch fitness tracker ‘progress’ bar - gradient on CAShapeLayer stroke

I'm writing an app that could make good use of the Apple Watch's fitness tracker design, here:
So far, I've created the basic outline which is just a CAShapeLayer with a CGPath of an ellipse. I use strokeStart and strokeEnd to animate the progress. My problem comes when applying a gradient to the outline. How do I apply a gradient like above to the stroke of a CGPath?
The cleanest way to do this without having to drop down to Core Graphics or GL is to create a layer containing the angle gradient that you want the ring filled with, mask it with a CAShapeLayer containing your circular path (with the appropriate line width and cap settings), then, as you’re currently doing, use the shape layer’s strokeEnd property to set the “fill” percentage. Note that there isn’t a built-in way to create an angle gradient—you can use one of the suggestions in this answer for that.
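A rough sketch of that layer setup, assuming a conic-type CAGradientLayer is acceptable as the angle gradient (it only exists on newer iOS versions; otherwise substitute one of the angle-gradient techniques linked above). ringView, the colors, and the line width are placeholders:

// Gradient layer that will show through the ring-shaped mask.
CAGradientLayer *gradient = [CAGradientLayer layer];
gradient.frame = ringView.bounds;
gradient.type = kCAGradientLayerConic;               // angle gradient (iOS 12+)
gradient.colors = @[(id)[UIColor redColor].CGColor,
                    (id)[UIColor magentaColor].CGColor];
gradient.startPoint = CGPointMake(0.5, 0.5);
gradient.endPoint = CGPointMake(0.5, 0.0);

// Circular path used for the mask; strokeEnd sets the "fill" percentage.
CGFloat lineWidth = 20.0;
UIBezierPath *ring = [UIBezierPath bezierPathWithOvalInRect:
                      CGRectInset(ringView.bounds, lineWidth / 2, lineWidth / 2)];

CAShapeLayer *mask = [CAShapeLayer layer];
mask.path = ring.CGPath;
mask.fillColor = [UIColor clearColor].CGColor;
mask.strokeColor = [UIColor blackColor].CGColor;     // any opaque color works in a mask
mask.lineWidth = lineWidth;
mask.lineCap = kCALineCapRound;
mask.strokeEnd = 0.65;                               // e.g. 65% progress

gradient.mask = mask;
[ringView.layer addSublayer:gradient];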
edit: Also, you’ll need a pair of semicircular “cap” images, one at each end of the ring—as the fill percentage gets close to 100%, the region at the top will reveal the discontinuity between the start and end color. In your example image above, you’d need a red semicircle oriented like this ( at the start, and a pink one oriented like this ) with a translation/rotation transform tracking the end.
additional edit: Also also, since the end-cap semicircle will be moving along the gradient, you’ll need it to change color, interpolating from the start color to the end color as the fill amount goes from 0% to 100%. Best way to do that is with a shape layer with a semicircular path, since you can set the fillColor of that without having to redraw image contents.
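A sketch of that end-cap color tracking, assuming endCapLayer is a CAShapeLayer whose semicircular path is already set, ringRadius and the start/end colors are defined elsewhere, and progress runs from 0.0 to 1.0 (all names here are illustrative):

// Linear interpolation between two RGB-compatible UIColors.
static UIColor *InterpolatedColor(UIColor *from, UIColor *to, CGFloat t) {
    CGFloat fr, fg, fb, fa, tr, tg, tb, ta;
    [from getRed:&fr green:&fg blue:&fb alpha:&fa];
    [to getRed:&tr green:&tg blue:&tb alpha:&ta];
    return [UIColor colorWithRed:fr + (tr - fr) * t
                           green:fg + (tg - fg) * t
                            blue:fb + (tb - fb) * t
                           alpha:fa + (ta - fa) * t];
}

// The cap's fill color tracks the gradient; no image redraw needed, just a property set.
endCapLayer.fillColor = InterpolatedColor(startColor, endColor, progress).CGColor;

// Move and rotate the cap so it follows the end of the stroke.
CGFloat angle = 2.0 * M_PI * progress - M_PI_2;      // assuming the ring starts at 12 o'clock
CGPoint center = CGPointMake(CGRectGetMidX(ringView.bounds), CGRectGetMidY(ringView.bounds));
endCapLayer.position = CGPointMake(center.x + ringRadius * cos(angle),
                                   center.y + ringRadius * sin(angle));
endCapLayer.transform = CATransform3DMakeRotation(angle, 0, 0, 1);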
We did this for an iOS app, but quickly stopped because it bogs down fast.
I think Apple is using images, as they do in the Lister example.

OpenGL ES blend func so color always shows against background

I am using OpenGL ES 1.1 to draw lines in my iPad app. I want to make sure that the drawn lines are always visible on the screen regardless of the background colors, and without allowing the user to choose a color. Is there a blend function that will create this effect? So the color of the line drawn will change based on the colors already drawn beneath it and therefore always be visible.
Sadly, the final blending of fragments into the framebuffer is still fixed-function. Furthermore, glLogicOp isn't implemented in ES, so you can't do something cheap like XOR drawing.
I think the net effect is that:
you want the output colour to be a custom function of the colour already in the frame buffer;
but the frame buffer can't be read in a shader (it'd break the pipeline and lead towards concurrency issues).
You're therefore going to have to implement a ping pong pipeline.
You have two off-screen buffers. One represents what you output last frame, the other represents what you output the frame before that.
To generate a new frame, you render using the one that represents the frame before as an input. Because it's an input, you can sample it wherever you want and make whatever calculations you like on it. You render to the other buffer that you have (i.e., the even older one) because you no longer care about its contents.
Then you copy all that to the screen and swap the two over, meaning that what you just drew is still in a texture to refer to as what you drew last frame. What you just referred to becomes your next drawing target because it's something you conveniently already have lying around.
So you'll be immediately interested in rendering to a texture. You'll also need to decide what function you want to use to pick a suitable 'different' colour to the existing background. Maybe just inverting it will do?
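A rough sketch of that ping-pong structure, assuming two texture-backed FBOs (fbo[0]/fbo[1] with matching textures tex[0]/tex[1]) were created up front with glGenFramebuffersOES / glFramebufferTexture2DOES, that defaultFramebuffer is the on-screen framebuffer, and that the two draw helpers exist; all of these names are placeholders:

- (void)renderFrame {
    int previous = 1 - currentBuffer;               // currentBuffer is a 0/1 ivar

    // Draw the new frame into the older off-screen buffer, sampling what we
    // drew last frame as an ordinary texture input.
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, fbo[currentBuffer]);
    glBindTexture(GL_TEXTURE_2D, tex[previous]);
    [self drawSceneUsingPreviousFrameTexture];

    // Present the result: draw the freshly rendered texture to the screen.
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, defaultFramebuffer);
    glBindTexture(GL_TEXTURE_2D, tex[currentBuffer]);
    [self drawFullScreenQuad];

    // Swap roles: this frame's output becomes next frame's input.
    currentBuffer = previous;
}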
I think this could work:
glBlendFunc(GL_ONE_MINUS_DST_COLOR, GL_ZERO);
Draw your lines with a white color, and then the result will be rendered as
[1,1,1,1] * ( 1 - [DstR, DstG, DstB, DstA]) + ([DstR, DstG, DstB, DstA] * 0)
This should render a black pixel where the background is white, a white pixel where the background is black, a yellow pixel where the background is blue, etc.
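For context, a minimal ES 1.1 drawing pass using that blend function (the vertex array and count are assumed to exist):

glEnable(GL_BLEND);
glBlendFunc(GL_ONE_MINUS_DST_COLOR, GL_ZERO);   // output = white * (1 - dst)

glColor4f(1.0f, 1.0f, 1.0f, 1.0f);              // draw the line in white
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, lineVertices);  // lineVertices: x,y pairs
glDrawArrays(GL_LINE_STRIP, 0, vertexCount);
glDisableClientState(GL_VERTEX_ARRAY);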

Colors in app different than the image they're based on

I'm creating a coloring book app for a client and it's coming along well. Very well, in fact, because it's finished. However there's one snag: some of the colors aren't displaying as expected.
The client gave me images and a key to use for the brush creation. I took the color reference and made circles in several different sizes for each color, to represent different brush sizes. I then load the brush like so:
brush = [[CCSprite spriteWithFile:@"yellowbrush3.png"] retain];
[brush setBlendFunc:(ccBlendFunc){ GL_ONE, GL_ONE_MINUS_SRC_ALPHA }];
[brush setOpacity:20];
The brush image for this particular file is:
I made a screenshot of the color output to compare with the key I used to create the brushes:
About half of the colors show up just fine, while the others are noticeably off.
I've tried different levels of opacity, changing some GL settings, but nothing seems to help.
This is due to a blending artifact and the fact that you are drawing on a white background. To confirm this, alter the background color and see that the hue for each color changes slightly. To correct this issue you can reduce the opacity of the painted colors, or pick a more appropriate blending mode.
Try changing your glBlendFunc to use GL_ONE for both parameters. This will remove the blending but at least your colors should be 100% accurate.
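Against the snippet in the question, that change would look something like this:

// GL_ONE for both factors, as suggested: result = src * 1 + dst * 1.
[brush setBlendFunc:(ccBlendFunc){ GL_ONE, GL_ONE }];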

How to add a shadow to an UIImageView which fits the shape of the image content but with some rotation and shift effect

I have been looking for a solution on the web for a long time. Most tutorials are fairly simple, covering how to add a shadow to a UIView. I also noticed that if we add a shadow to a UIImageView, the shadow shape can perfectly fit the shape of the content image if the image itself has an alpha channel. Say, for example, the image is an animal with a transparent background; the shadow shape is then the same as that animal (not a rectangular shadow matching the UIImageView frame).
But that is not enough. What I need is to add some changes to the shadow so it has a rotation angle and a compressed (squeezed or shifted) effect, so that it looks like the sunlight comes from a certain spot.
To demonstrate what I need, I have uploaded 2 images below, which I captured from the Google Maps app created by Apple. You can imagine the annotation pin is an image with a pin shape, so the shadow is also "pin shaped", but it is not simply offset by a CGSize: you can see the top of the shadow is shifted right by about 35 degrees and the height is slightly squeezed.
When we tap and hold a pin, the shadow is also animated away from the pin, so I believe that such a shadow can be made programmatically.
The best shadow tutorial I have found so far is http://nachbaur.com/blog/fun-shadow-effects-using-custom-calayer-shadowpaths but unfortunately, it cannot produce this effect.
If anyone knows the answer or knows any better terms to search for, please let me know. Thank you.
(Please note that the shape of the image is dynamic in the App, so using any tool like Photoshop to pre-render the shadow is not an option.)
In order to create dynamic effects like this, you have to use Core Graphics. It's incredibly powerful once you know how to use it. Basically you need to set a skew transform on the context, set up a shadow and draw the image. You will probably have to use transparency layers as well.
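A rough sketch of that idea, assuming the image has a transparent background. This version paints the image's silhouette through a skewed transform using kCGBlendModeSourceIn rather than CGContextSetShadow, and the shear/squash values, shadow alpha, and method name are all illustrative:

// Renders `image` with a sheared, flattened "sunlight" shadow behind it.
- (UIImage *)imageWithSkewedShadow:(UIImage *)image {
    CGSize size = image.size;
    CGSize canvas = CGSizeMake(size.width * 1.6, size.height * 1.1);   // room for the shadow

    UIGraphicsBeginImageContextWithOptions(canvas, NO, image.scale);
    CGContextRef ctx = UIGraphicsGetCurrentContext();

    // 1. Shadow pass: shear the top to the right and squash the height, anchored
    //    at the image's bottom edge, then flood the drawn silhouette with gray.
    CGContextSaveGState(ctx);
    CGContextTranslateCTM(ctx, 0, size.height);
    CGContextConcatCTM(ctx, CGAffineTransformMake(1, 0, -0.7, 0.5, 0, 0));
    CGContextTranslateCTM(ctx, 0, -size.height);
    [image drawAtPoint:CGPointZero];
    CGContextSetBlendMode(ctx, kCGBlendModeSourceIn);   // keep only the opaque silhouette
    CGContextSetFillColorWithColor(ctx, [UIColor colorWithWhite:0 alpha:0.35].CGColor);
    CGContextFillRect(ctx, CGRectMake(-canvas.width, -canvas.height,
                                      canvas.width * 3, canvas.height * 3));
    CGContextRestoreGState(ctx);

    // 2. Draw the image itself, untransformed, on top of its shadow.
    [image drawAtPoint:CGPointZero];

    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}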
It doesn't sound like you can use CALayer shadows, since that is meant to solve a specific use-case. The approach Apple takes with the pin marks on the map is to have two separate images that are created ahead of time (e.g. in Photoshop) and they position them within the map relative to a reference point.
If you really do need to do this at run time, it should still be possible using either Core Graphics or Core Image. To get a blurred shadow appearance, you can use one of the blur CIFilters (the kCICategoryBlur category). You can then convert the image to grayscale, and to get that compressed look you just resize and skew the image.
Once you have two separate images, you can either take the CGImageRef for the shadow image and set that as the contents of another sublayer, or you can add it as a separate view.
If you know what all the shapes are, you could just render a shadow image in Photoshop or something.