Coloring a CCSprite regardless of its original colors - objective-c

I have a CCSprite that was created from a png with transparent background.
I want to be able to apply any color I choose to this sprite, without the sprite's existing colors affecting how much of each channel I have to add.
I've tried this:
mySprite.color = ccc3(200,200,255);
This was an attempt to add a little blue-ish feel to my sprite. But tinting works by scaling the colors already present in the sprite, and since my sprite has virtually no blue in it (most of it is yellow), the resulting effect is pretty poor: everything gets really dark, with only a slight blue-ish cast, not what I wanted.
The ideal effect for me in this case would be to ADD a light blue mask to it with very low alpha.
Is there an easy way to do that without composing sprites?
I've tried using CCTexture2D, but had no luck: there is no built-in method for working with colors, and most tutorials only teach you how to build textures out of image files.

This is deceptively hard to do in code with the original sprite. Another option would be:
create a new sprite, which is just a white outline version of your original sprite
the color property of this white sprite will now respond exactly to the RGB values you pass in
so pass in your light blue value to the white sprite and set the opacity correctly
then overlay it on your original sprite (sketched below)
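A minimal sketch of that idea in cocos2d, assuming you have exported a white version of the artwork as a separate file (the file names here are placeholders for your own assets):

CCSprite *base = [CCSprite spriteWithFile:@"hero.png"];
[self addChild:base];

// White copy of the same artwork: tinting white pixels gives you exactly the RGB you pass in.
CCSprite *tintOverlay = [CCSprite spriteWithFile:@"hero_white.png"];
tintOverlay.position = ccp(base.contentSize.width / 2, base.contentSize.height / 2);
tintOverlay.color = ccc3(200, 200, 255);   // the light blue tint you wanted
tintOverlay.opacity = 64;                  // low alpha so it reads as a subtle mask
[base addChild:tintOverlay];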
Any good?

The only way you can achieve this is by overlaying (masking) the sprite with the proper OpenGL blend functions. When you say "add a light blue mask", that's exactly what you need to do. You may find this visual blendfunc tool helpful, if only to understand how blending with mask sprites works and what you can achieve with it. There's no built-in support for this in Cocos2D, however, so you'll have to resort to pure OpenGL inside a cocos2d node's -(void) draw {} method.
Tinting (changing the color property) will only modify the RGB channels of the entire image by changing the vertex colors of all 4 vertices.
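As a small illustration of what changing a blend function looks like in practice: CCSprite does expose a per-sprite blendFunc property, which is enough for a simple additive overlay; anything more elaborate than that is where the raw OpenGL inside -(void) draw comes in. The mask texture name and targetSprite below are placeholders:

CCSprite *mask = [CCSprite spriteWithFile:@"mask_blue.png"];   // placeholder asset
mask.blendFunc = (ccBlendFunc){ GL_SRC_ALPHA, GL_ONE };        // additive overlay blend
mask.opacity = 64;
[targetSprite addChild:mask];                                  // targetSprite is your original sprite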

Related

Recreate Apple Watch fitness tracker ‘progress’ bar - gradient on CAShapeLayer stroke

I'm writing an app that could make good use of the Apple Watch's fitness tracker design, here:
So far, I've created the basic outline which is just a CAShapeLayer with a CGPath of an ellipse. I use strokeStart and strokeEnd to animate the progress. My problem comes when applying a gradient to the outline. How do I apply a gradient like above to the stroke of a CGPath?
The cleanest way to do this without having to drop down to Core Graphics or GL is to create a layer containing the angle gradient that you want the ring filled with, mask it with a CAShapeLayer containing your circular path (with the appropriate line width and cap settings), then, as you’re currently doing, use the shape layer’s strokeEnd property to set the “fill” percentage. Note that there isn’t a built-in way to create an angle gradient—you can use one of the suggestions in this answer for that.
edit: Also, you’ll need a pair of semicircular “cap” images, one at each end of the ring—as the fill percentage gets close to 100%, the region at the top will reveal the discontinuity between the start and end color. In your example image above, you’d need a red semicircle oriented like this ( at the start, and a pink one oriented like this ) with a translation/rotation transform tracking the end.
additional edit: Also also, since the end-cap semicircle will be moving along the gradient, you’ll need it to change color, interpolating from the start color to the end color as the fill amount goes from 0% to 100%. Best way to do that is with a shape layer with a semicircular path, since you can set the fillColor of that without having to redraw image contents.
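A rough sketch of the basic gradient-plus-mask setup from the answer above, before the end caps are added (ringBounds, lineWidth and angleGradientImage are assumed to exist already; the angle-gradient image itself comes from one of the techniques linked above):

CALayer *gradientLayer = [CALayer layer];
gradientLayer.frame = ringBounds;
gradientLayer.contents = (__bridge id)angleGradientImage.CGImage;

CAShapeLayer *ringMask = [CAShapeLayer layer];
ringMask.frame = gradientLayer.bounds;
ringMask.path = [UIBezierPath bezierPathWithOvalInRect:
                    CGRectInset(gradientLayer.bounds, lineWidth / 2, lineWidth / 2)].CGPath;
ringMask.fillColor = [UIColor clearColor].CGColor;
ringMask.strokeColor = [UIColor blackColor].CGColor;   // any opaque color works for a mask
ringMask.lineWidth = lineWidth;
ringMask.lineCap = kCALineCapRound;
ringMask.strokeEnd = 0.0;                              // the "fill" percentage

gradientLayer.mask = ringMask;
[self.view.layer addSublayer:gradientLayer];           // assuming this lives in a view controller

// Later, to show 60% progress (strokeEnd is animatable):
ringMask.strokeEnd = 0.6;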
We did this for an iOS app, but abandoned it quickly because it gets bogged down.
I think Apple is using images, as they do in the Lister example.

How to make a PhysicsBody based on Alpha Values

Suppose there is a scene as follows:
There is a scene with the same size as the frame of the device. The scene has a red ball, which is able to move throughout the 'world'. This world is defined by black and white areas, where the ball is ONLY able to move in the area that is white. Here is a picture to help explain:
Parts of the black area can be erased, as if the user is drawing with white color over the scene. This would mean that the area in which the ball can move is constantly changing. Now, how would one go about implementing a physicsBody for the edge between the white and black areas?
I tried redefining the physicsBody every time it is changed, but once the shape becomes complex enough, this isn't a viable solution at all. I tried creating a two-dimensional array of 'boxes' that are invisible and specify whether most of the area within each box is white or black, and if the ball touched a box that was black, it would be pushed back. However, this required heavy rendering and iterating over the array too much. Since my original array contained boxes a little bigger than a pixel, I tried making these boxes bigger to smooth the motion a little, but this eventually caused part of the ball to be stopped by white areas and appear to be inside the black area. This was undesired, since the user could feel invisible barriers that they seemed to be hitting.
I tried searching for other methods to implement this 'destructible terrain' type scene, but the solutions that I found and tried were using other game engines. To further clarify, I am using Objective-C and Apple's SpriteKit framework; and I am not looking for a detailed class full of code, but rather some pseudo-code or implementation ideas that would lead me to a solution.
Thank you.
If your deployment target is iOS 8, this may be what you're looking for...
+ bodyWithTexture:alphaThreshold:size:
Here's the description from Apple's documentation:
"Creates a physics body from the contents of a texture. Only texels that exceed a certain transparency value are included in the physics body."
where a texel is a texture element. You will need to convert your image to an SKTexture before creating the SKPhysicsBody.
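A minimal sketch, assuming the terrain is an SKSpriteNode whose image is opaque where the ball should be blocked and fully transparent where it may travel (the asset name is a placeholder):

SKSpriteNode *terrain = [SKSpriteNode spriteNodeWithImageNamed:@"terrain"];
terrain.physicsBody = [SKPhysicsBody bodyWithTexture:terrain.texture
                                      alphaThreshold:0.5
                                                size:terrain.size];
terrain.physicsBody.dynamic = NO;   // the terrain itself should not move

Every time the user erases part of the black area you would presumably regenerate the texture and rebuild the body, so it is worth profiling how often you can afford to do that.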
I'm not sure if it will allow for a hole in the middle like your drawing. If not, I suspect you can connect two physics bodies, a left half and a right half, to form the hole.

OpenGL ES blend func so color always shows against background

I am using OpenGL ES 1.1 to draw lines in my iPad app. I want to make sure that the drawn lines are always visible on the screen regardless of the background colors, and without allowing the user to choose a color. Is there a blend function that will create this effect? So the color of the line drawn will change based on the colors already drawn beneath it and therefore always be visible.
Sadly the final blending of fragments into the framebuffer is still fixed function. Furthermore glLogicOp isn't implemented in ES so you can't do something cheap like XOR drawing.
I think the net effect is that:
you want the output colour to be a custom function of the colour already in the frame buffer;
but the frame buffer can't be read in a shader (it'd break the pipeline and lead towards concurrency issues).
You're therefore going to have to implement a ping pong pipeline.
You have two off-screen buffers. One represents what you output last frame, the other represents what you output the frame before that.
To generate a new frame you render using the one that represents the frame before as an input. Because it's an input you can sample it wherever you want and make whatever calculations you like on it. You render to the other buffer that you have (ie, the even older one) because you no longer care about its contents.
Then you copy all that to the screen and swap the two over, meaning that what you just drew is still in a texture to refer to as what you drew last frame. What you just referred to becomes your next drawing target because it's something you conveniently already have lying around.
So you'll be immediately interested in rendering to a texture. You'll also need to decide what function you want to use to pick a suitable 'different' colour to the existing background. Maybe just inverting it will do?
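A bare-bones sketch of the two-buffer setup under those assumptions (OpenGL ES 1.1 with the OES framebuffer extension; width and height are assumed variables, and error checking and the actual drawing code are omitted):

#import <OpenGLES/ES1/gl.h>
#import <OpenGLES/ES1/glext.h>

GLuint fbo[2], tex[2];
int current = 0;   // which buffer is the render target this frame
glGenFramebuffersOES(2, fbo);
glGenTextures(2, tex);
for (int i = 0; i < 2; i++) {
    glBindTexture(GL_TEXTURE_2D, tex[i]);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, fbo[i]);
    glFramebufferTexture2DOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES,
                              GL_TEXTURE_2D, tex[i], 0);
}

// Each frame: bind fbo[current] as the target, use tex[1 - current] as an
// input texture while drawing, present the result, then swap:
current = 1 - current;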
I think this could work:
glBlendFunc(GL_ONE_MINUS_DST_COLOR, GL_ZERO);
Draw your lines with a white color, and then the result will be rendered as
[1,1,1,1] * ( 1 - [DstR, DstG, DstB, DstA]) + ([DstR, DstG, DstB, DstA] * 0)
This should render a black pixel where the background is white, a white pixel where the background is black, a yellow pixel where the background is blue, etc.
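A small sketch of how that looks in OpenGL ES 1.1 (the line geometry itself is assumed to be set up already, and vertexCount is a placeholder):

glEnable(GL_BLEND);
glBlendFunc(GL_ONE_MINUS_DST_COLOR, GL_ZERO);
glColor4f(1.0f, 1.0f, 1.0f, 1.0f);                  // draw the lines in white
glDrawArrays(GL_LINE_STRIP, 0, vertexCount);        // your line vertex array
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);  // restore a typical default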

How to add a shadow to a UIImageView which fits the shape of the image content, but with some rotation and shift effect

I have been looking for a solution on the web for a long time. Most tutorials are fairly simple about adding a shadow to a UIView. I also noticed that if we add a shadow to a UIImageView, the shadow shape can perfectly fit the shape of the content image if the image itself has an alpha channel. Say, for example, the image is an animal with a transparent background: the shadow shape is then also that animal (not a rectangular shadow matching the UIImageView frame).
But this is not enough. What I need is to add some changes to the shadow so it has some rotation angle and a compressed (squeezed or shifted) effect, so that it looks like the sunlight comes from a certain spot.
To demonstrate what I need, I have uploaded 2 images below, which I captured from the Google Maps app created by Apple. You can imagine the annotation pin is an image which has the pin shape, so the shadow is also "pin shaped", but it is not simply "offset" with a CGSize: you can see the top of the shadow is shifted right by about 35 degrees and the height is slightly squeezed.
When we tap and hold a pin, the shadow also animates away from the pin, so I believe such a shadow can be made programmatically.
The best shadow tutorial I have found so far is http://nachbaur.com/blog/fun-shadow-effects-using-custom-calayer-shadowpaths but unfortunately it cannot produce this effect.
If anyone know the answer or know any better words to search for, please let me know. Thank you.
(Please note that the shape of the image is dynamic in the App, so using any tool like Photoshop to pre-render the shadow is not an option.)
In order to create dynamic effects like this, you have to use Core Graphics. It's incredibly powerful once you know how to use it. Basically you need to set a skew transform on the context, set up a shadow and draw the image. You will probably have to use transparency layers as well.
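One possible sketch of that approach (no blur applied here, the shear/scale numbers are arbitrary, and `image` is assumed to be your UIImage with an alpha channel):

// Pass 1: build a flat, semi-transparent silhouette of the image.
UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
CGContextRef ctx = UIGraphicsGetCurrentContext();
[image drawAtPoint:CGPointZero];
CGContextSetBlendMode(ctx, kCGBlendModeSourceIn);   // keep the fill only where alpha > 0
CGContextSetFillColorWithColor(ctx, [UIColor colorWithWhite:0 alpha:0.35].CGColor);
CGContextFillRect(ctx, (CGRect){CGPointZero, image.size});
UIImage *silhouette = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

// Pass 2: draw the skewed, squashed silhouette first, then the image on top.
UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
ctx = UIGraphicsGetCurrentContext();
CGContextSaveGState(ctx);
CGContextTranslateCTM(ctx, 0, image.size.height);                          // pivot at the image's base
CGContextConcatCTM(ctx, CGAffineTransformMake(1, 0, -0.6, 0.5, 0, 0));     // shear the top right, squash the height (tune to taste)
CGContextTranslateCTM(ctx, 0, -image.size.height);
[silhouette drawAtPoint:CGPointZero];
CGContextRestoreGState(ctx);
[image drawAtPoint:CGPointZero];
UIImage *composited = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

You will probably want a larger canvas than image.size so the sheared shadow isn't clipped, and blurring the silhouette (for example with a CIGaussianBlur pass) before compositing would soften it into a more believable shadow.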
It doesn't sound like you can use CALayer shadows, since that is meant to solve a specific use-case. The approach Apple takes with the pin marks on the map is to have two separate images that are created ahead of time (e.g. in Photoshop) and they position them within the map relative to a reference point.
If you really do need to do this at run-time, it should still be possible by using either Core Graphics or Core Image. To get a blurred shadow appearance, you can use a blur CIFilter such as CIGaussianBlur. You can then convert the image to grayscale. And to get that compressed look you just need to resize and skew the image.
Once you have two separate images, you can either take the CGImageRef for the shadow image and can set that as the content of another sublayer, or you can add it as a separate view.
If you know what all the shapes are, you could just render a shadow image in Photoshop or something.

App graphic making (transparent and no extra spaces)

I am a coder, not a graphics maker. I can produce graphics that look decent, but I cannot produce graphics that will technically "work." This is what I mean:
I am using CGRectIntersectsRect for colliding images. My image has SOME extra space, which I have made completely transparent in Adobe Photoshop. But even though this extra transparent space is not visible, it is still PART of the image, so when the two images collide it can look like you are hitting nothing: if the other image touches the transparent space, CGRectIntersectsRect still reports an intersection and my code is executed. I only want my code to be executed if it hits the actual COLORED area of the image. Here are two things that could help me with that; each comes with a question.
Learn how to make NO EXTRA SPACE on an image in photoshop. How could I do this, tutorials?
CGRectIntersectsRect only called when touching a color part of an image. A way to do this?
Thank you guys!
Regarding your question #1, it depends. All images are rectangular. So, if your sprite is rectangular, you can crop it in Photoshop to just the rectangular area. But if you want to handle, say, a circular ball, then you can't "remove the extra space": your circular ball will always be stored in a rectangular image, with transparent space in the corners.
Learn how to make NO EXTRA SPACE on an image in photoshop. How could I do this, tutorials?
You can manually select an area using the Rectangular Marquee Tool and Image > Crop or automatically trim the image based on an edge pixel color using Image > Trim.
CGRectIntersectsRect only called when touching a color part of an image. A way to do this?
You can use pixel-perfect collisions or create better bounding shapes for your game objects. For example, instead of using pixel-perfect collision for a spaceship like this one, you could use a triangle for the wings, a rectangle for the body, and a triangle for the head.
Pixel-perfect collision
One way you could implement it (see the sketch after these steps) would be to
Have a blank image in memory.
Draw the visible pixels from one image in blue (#0000ff).
Draw the visible pixels from the other image in red (#ff0000).
If any pixel in the result is purple (#ff00ff), then there is an intersection.
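A rough sketch of the same idea, implemented with alpha buffers instead of colored pixels: render each image's alpha channel over the intersection rectangle into its own 8-bit buffer and look for a pixel where both are opaque. It assumes both frames live in the same Core-Graphics-style (bottom-left origin) coordinate space; flip the y values first if you work in UIKit coordinates.

#import <UIKit/UIKit.h>

static BOOL ImagesCollide(UIImage *imageA, CGRect frameA,
                          UIImage *imageB, CGRect frameB)
{
    CGRect overlap = CGRectIntersection(frameA, frameB);
    if (CGRectIsNull(overlap) || CGRectIsEmpty(overlap)) return NO;

    size_t w = (size_t)ceil(overlap.size.width);
    size_t h = (size_t)ceil(overlap.size.height);

    // One alpha-only buffer per image, covering just the overlap area.
    uint8_t *alphaA = calloc(w * h, 1);
    uint8_t *alphaB = calloc(w * h, 1);
    CGContextRef ctxA = CGBitmapContextCreate(alphaA, w, h, 8, w, NULL, (CGBitmapInfo)kCGImageAlphaOnly);
    CGContextRef ctxB = CGBitmapContextCreate(alphaB, w, h, 8, w, NULL, (CGBitmapInfo)kCGImageAlphaOnly);

    // Shift each image so the overlap rect maps onto the buffer's origin.
    CGContextTranslateCTM(ctxA, frameA.origin.x - overlap.origin.x,
                                frameA.origin.y - overlap.origin.y);
    CGContextDrawImage(ctxA, (CGRect){CGPointZero, frameA.size}, imageA.CGImage);
    CGContextTranslateCTM(ctxB, frameB.origin.x - overlap.origin.x,
                                frameB.origin.y - overlap.origin.y);
    CGContextDrawImage(ctxB, (CGRect){CGPointZero, frameB.size}, imageB.CGImage);

    // Collision = some pixel where both images are (nearly) opaque.
    BOOL hit = NO;
    for (size_t i = 0; i < w * h && !hit; i++) {
        hit = (alphaA[i] > 16 && alphaB[i] > 16);
    }

    CGContextRelease(ctxA);
    CGContextRelease(ctxB);
    free(alphaA);
    free(alphaB);
    return hit;
}

Doing this every frame for many objects is expensive, so it is usually combined with a cheap CGRectIntersectsRect pre-check, exactly as described above.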
Alternative collision detection solution
If your game is physics-based, then you can use a physics engine like Box2D. You can use circles, rectangles, and polygons to represent all of your game objects and it'll give you accurate results without unnecessary overhead.
For collision detection for non-rectangular shapes, you should look into one of the many game and/or physics libraries available for iOS. Cocos2d coupled with Box2d or chipmunk are popular choices.
If you want to do it yourself, you'll need to start with something like a custom CGPath tracing the actual shape of each object, then use a function like CGPathContainsPoint (that's from memory, it may be wrong). But it is not a simple job. Angry birds uses box2d, AFAIK.
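For what it's worth, a tiny sketch of that custom-path idea (the triangle coordinates and localPoint are just placeholders; points from the other object would have to be converted into this object's local space first):

CGMutablePathRef outline = CGPathCreateMutable();
CGPathMoveToPoint(outline, NULL, 0, 40);
CGPathAddLineToPoint(outline, NULL, 20, 0);
CGPathAddLineToPoint(outline, NULL, 40, 40);
CGPathCloseSubpath(outline);

BOOL inside = CGPathContainsPoint(outline, NULL, localPoint, NO);
CGPathRelease(outline);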