I'm writing an app that could make good use of the Apple Watch's fitness tracker design, here:
So far, I've created the basic outline which is just a CAShapeLayer with a CGPath of an ellipse. I use strokeStart and strokeEnd to animate the progress. My problem comes when applying a gradient to the outline. How do I apply a gradient like above to the stroke of a CGPath?
The cleanest way to do this without having to drop down to Core Graphics or GL is to create a layer containing the angle gradient that you want the ring filled with, mask it with a CAShapeLayer containing your circular path (with the appropriate line width and cap settings), then, as you’re currently doing, use the shape layer’s strokeEnd property to set the “fill” percentage. Note that there isn’t a built-in way to create an angle gradient—you can use one of the suggestions in this answer for that.
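A rough sketch of that layer setup (assuming you already have an angleGradientLayer from one of the angle-gradient approaches linked above; someView, sizes, and values are placeholders):

    // Mask an angle-gradient layer with a stroked circular path, then drive
    // the "fill" percentage with strokeEnd.
    CGRect ringBounds = CGRectMake(0, 0, 200, 200);
    angleGradientLayer.frame = ringBounds;

    CAShapeLayer *ringMask = [CAShapeLayer layer];
    ringMask.frame = ringBounds;
    ringMask.path = [UIBezierPath bezierPathWithOvalInRect:CGRectInset(ringBounds, 10, 10)].CGPath;
    ringMask.fillColor = [UIColor clearColor].CGColor;
    ringMask.strokeColor = [UIColor blackColor].CGColor; // any opaque color works for a mask
    ringMask.lineWidth = 20;
    ringMask.lineCap = kCALineCapRound;

    angleGradientLayer.mask = ringMask;
    [someView.layer addSublayer:angleGradientLayer];

    // Set or animate this exactly as you're doing now on your existing shape layer.
    ringMask.strokeEnd = 0.65;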
edit: Also, you’ll need a pair of semicircular “cap” images, one at each end of the ring—as the fill percentage gets close to 100%, the region at the top will reveal the discontinuity between the start and end color. In your example image above, you’d need a red semicircle oriented like this ( at the start, and a pink one oriented like this ) with a translation/rotation transform tracking the end.
additional edit: Also also, since the end-cap semicircle will be moving along the gradient, you’ll need it to change color, interpolating from the start color to the end color as the fill amount goes from 0% to 100%. Best way to do that is with a shape layer with a semicircular path, since you can set the fillColor of that without having to redraw image contents.
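For the end cap, a minimal sketch; interpolatedColor() is a hypothetical helper you'd write to lerp between two UIColors, and startColor/endColor are your gradient's endpoint colors:

    // A semicircular cap whose fill color tracks the fill amount.
    CAShapeLayer *endCap = [CAShapeLayer layer];
    endCap.path = [UIBezierPath bezierPathWithArcCenter:CGPointZero
                                                 radius:10.0   // half the ring's line width
                                             startAngle:-M_PI_2
                                               endAngle:M_PI_2
                                              clockwise:YES].CGPath;

    CGFloat fillAmount = 0.65; // 0.0 ... 1.0
    endCap.fillColor = interpolatedColor(startColor, endColor, fillAmount).CGColor;
    // Each time fillAmount changes, also update a position/rotation transform so
    // the cap sits at the leading end of the stroke (angle = fillAmount * 2 * M_PI
    // from the ring's start, measured around the ring's center).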
We did this for an iOS app, but quickly stopped because it bogs down performance fast.
I think Apple is using images, as they do in the Lister example.
Related
I am developing a Mac app and I want to draw a view like a radar, but I can find no method to draw a gradient along an arc. The existing methods only draw a gradient in one direction.
What you want is an angle gradient. I've written a Core Image filter that generates an angle gradient; give it an opaque color (e.g., green) for the start color and the completely-transparent version of that color for the middle and end colors.
(The filter's output is actually infinite in extent and centered at the origin, so you'll need to mask it out to a circle and use an affine transform at one level or another to get it into the right position.)
Extra credit: In the kernel code for that filter (near the start of the .m), there's a line that starts the gradient at straight-up (90°) rather than straight-right. You could change the code, both of the filter and of the kernel, to make this a parameter (like inputStartColor et al) that you could vary over time, using a CABasicAnimation or something similar.
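This isn't the author's actual filter, but as a rough idea, an angle-gradient kernel in the old-style kernel language (embedded as a string in the filter's .m) might look something like this, with the 90° offset mentioned above as the radians(90.0) term:

    static NSString *const kAngleGradientKernel = @"\n"
        "kernel vec4 angleGradient(__color startColor, __color midColor, __color endColor)\n"
        "{\n"
        "    vec2 p = destCoord();\n"
        "    // start the gradient straight up rather than straight right\n"
        "    float angle = mod(atan(p.y, p.x) - radians(90.0), radians(360.0));\n"
        "    float t = angle / radians(360.0);\n"
        "    // start -> mid over the first half turn, mid -> end over the second\n"
        "    vec4 firstHalf = mix(startColor, midColor, clamp(t * 2.0, 0.0, 1.0));\n"
        "    return mix(firstHalf, endColor, clamp(t * 2.0 - 1.0, 0.0, 1.0));\n"
        "}\n";

Turning that fixed offset into an input parameter is the "extra credit" part: you could then animate it over time.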
You'll need to use a combination of axial and radial gradients and probably also some clipping paths. You can find all of this documented here.
You can also use colorWithPatternImage: to stroke any lines you're drawing.
I am using OpenGL ES 1.1 to draw lines in my iPad app. I want to make sure that the drawn lines are always visible on the screen regardless of the background colors, and without allowing the user to choose a color. Is there a blend function that will create this effect, so that the color of the drawn line changes based on the colors already drawn beneath it and is therefore always visible?
Sadly the final blending of fragments into the framebuffer is still fixed function. Furthermore glLogicOp isn't implemented in ES so you can't do something cheap like XOR drawing.
I think the net effect is that:
you want the output colour to be a custom function of the colour already in the frame buffer;
but the frame buffer can't be read in a shader (it'd break the pipeline and lead towards concurrency issues).
You're therefore going to have to implement a ping pong pipeline.
You have two off-screen buffers. One represents what you output last frame, the other represents what you output the frame before that.
To generate a new frame you render using the one that represents the frame before as an input. Because it's an input you can sample it wherever you want and make whatever calculations you like on it. You render to the other buffer that you have (ie, the even older one) because you no longer care about its contents.
Then you copy all that to the screen and swap the two over, meaning that what you just drew is still in a texture to refer to as what you drew last frame. What you just referred to becomes your next drawing target because it's something you conveniently already have lying around.
So you'll be immediately interested in rendering to a texture. You'll also need to decide what function you want to use to pick a suitable 'different' colour to the existing background. Maybe just inverting it will do?
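A bare-bones sketch of the ping-pong bookkeeping (ES 2.0-style names; under ES 1.1 the framebuffer calls carry an OES suffix, and the "sample the previous frame" step needs the shader support discussed above; all names are illustrative):

    GLuint fbo[2], tex[2];   // fbo[i] renders into tex[i]; both created up front
    int current = 0;         // the buffer we draw into this frame

    void renderFrame(void) {
        int previous = 1 - current;

        // Draw into the current buffer while sampling last frame's output.
        glBindFramebuffer(GL_FRAMEBUFFER, fbo[current]);
        glBindTexture(GL_TEXTURE_2D, tex[previous]);
        // ... draw the lines, picking a colour that contrasts with tex[previous] ...

        // Copy the result to the screen.
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        // ... draw a full-screen quad textured with tex[current] ...

        current = previous;  // what we just drew becomes "last frame" next time
    }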
I think this could work:
glBlendFunc(GL_ONE_MINUS_DST_COLOR, GL_ZERO);
Draw your lines with a white color, and then the result will be rendered as
[1,1,1,1] * ( 1 - [DstR, DstG, DstB, DstA]) + ([DstR, DstG, DstB, DstA] * 0)
This should render a black pixel where the background is white, a white pixel where the background is black, a yellow pixel where the background is blue, etc.
I have been looking for a solution on the web for a long time. Most tutorials are fairly simple and only cover adding a shadow to a UIView. I also noticed that if we add a shadow to a UIImageView, the shadow's shape can perfectly fit the shape of the image content if the image itself has an alpha channel. For example, if the image is an animal on a transparent background, the shadow is also shaped like that animal (not a rectangular shadow matching the UIImageView's frame).
But these are not enough. What I need is to alter the shadow so it has a rotation angle and a compressed (squeezed or shifted) effect, so that it looks like the sunlight is coming from a certain spot.
To demonstrate what I need, I have uploaded 2 images below, which I captured from the Google Maps app created by Apple. You can imagine the annotation pin is an image with a pin shape, so the shadow is also pin-shaped, but it is not simply offset by a CGSize: you can see the top of the shadow is shifted right by about 35 degrees and the height is slightly squeezed.
When we tap and hold a pin, the shadow also animates away from the pin, so I believe such a shadow can be created programmatically.
The best shadow tutorial I have found so far is http://nachbaur.com/blog/fun-shadow-effects-using-custom-calayer-shadowpaths but unfortunately it cannot produce this effect.
If anyone knows the answer or any better words to search for, please let me know. Thank you.
(Please note that the shape of the image is dynamic in the App, so using any tool like Photoshop to pre-render the shadow is not an option.)
In order to create dynamic effects like this, you have to use Core Graphics. It's incredibly powerful once you know how to use it. Basically you need to set a skew transform on the context, set up a shadow and draw the image. You will probably have to use transparency layers as well.
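One reading of that advice, as a minimal and untested sketch (image and size are placeholders; this produces a skewed copy of the image with a soft shadow, which you would composite behind the unskewed original):

    UIGraphicsBeginImageContextWithOptions(size, NO, 0);
    CGContextRef ctx = UIGraphicsGetCurrentContext();

    CGContextSaveGState(ctx);
    // Shear to the right, as if lit from low on the opposite side.
    CGContextConcatCTM(ctx, CGAffineTransformMake(1, 0, 0.6, 1, 0, 0));
    CGContextSetShadowWithColor(ctx, CGSizeMake(0, 0), 4,
                                [UIColor colorWithWhite:0 alpha:0.5].CGColor);
    // The transparency layer makes the shadow apply to the drawing as a whole.
    CGContextBeginTransparencyLayer(ctx, NULL);
    [image drawAtPoint:CGPointZero];
    CGContextEndTransparencyLayer(ctx);
    CGContextRestoreGState(ctx);

    UIImage *skewedWithShadow = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();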
It doesn't sound like you can use CALayer shadows, since that is meant to solve a specific use-case. The approach Apple takes with the pin marks on the map is to have two separate images that are created ahead of time (e.g. in Photoshop) and they position them within the map relative to a reference point.
If you really do need to do this at run-time, it should still be possible using either Core Graphics or Core Image. To get a blurred shadow appearance, you can use one of the blur CIFilters (the kCICategoryBlur category, e.g. CIGaussianBlur). You can then convert the image to grayscale. And to get that compressed look you just need to resize and skew the image.
Once you have two separate images, you can either take the CGImageRef for the shadow image and can set that as the content of another sublayer, or you can add it as a separate view.
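A loose sketch of that run-time pipeline (the Core Image filter names are real; pinImage, pinView and the transform values are placeholders):

    // Flatten the artwork to a dark silhouette, then blur it.
    CIImage *input = [CIImage imageWithCGImage:pinImage.CGImage];

    CIFilter *mono = [CIFilter filterWithName:@"CIColorMonochrome"];
    [mono setValue:input forKey:kCIInputImageKey];
    [mono setValue:[CIColor colorWithRed:0 green:0 blue:0] forKey:kCIInputColorKey];
    [mono setValue:@1.0 forKey:kCIInputIntensityKey];

    CIFilter *blur = [CIFilter filterWithName:@"CIGaussianBlur"];
    [blur setValue:mono.outputImage forKey:kCIInputImageKey];
    [blur setValue:@3.0 forKey:kCIInputRadiusKey];

    CIContext *ciContext = [CIContext contextWithOptions:nil];
    CGImageRef shadowCG = [ciContext createCGImage:blur.outputImage fromRect:input.extent];

    // Put the shadow in its own layer behind the pin, then squash and shear it.
    CALayer *shadowLayer = [CALayer layer];
    shadowLayer.contents = (__bridge id)shadowCG;
    shadowLayer.frame = pinView.frame;
    [shadowLayer setAffineTransform:CGAffineTransformConcat(
        CGAffineTransformMakeScale(1.0, 0.6),         // compress the height
        CGAffineTransformMake(1, 0, 0.6, 1, 0, 0))];  // shear the top to the right
    [pinView.superview.layer insertSublayer:shadowLayer below:pinView.layer];
    CGImageRelease(shadowCG);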
If you know what all the shapes are, you could just render a shadow image in Photoshop or something.
I have a CCSprite that was created from a png with transparent background.
I want to be able to apply colors to this sprite in a way that leaves me free to define which color it is, without the sprite's actual colors affecting how much of each color I have to add.
I've tried this:
mySprite.color = ccc3(200,200,255);
This was an attempt to add a little blue-ish feel to my sprite, but since tinting works by scaling the sprite's existing color channels, and my sprite has virtually no blue in it (most of it is yellow), the resulting effect is pretty sketchy: everything gets really dark, and there is a slight blue-ish cast, but not what I wanted.
The ideal effect for me on this case would be to ADD a light blue mask to it with very low alpha.
Is there an easy way to do that without composing sprites?
I've tried using CCTexture2D, but had no luck, as there is no built in method for working with colors, and most tutorials only teach you how to build textures out of image files.
This is deceptively hard to do in code with the original sprite. Another option would be:
create a new sprite, which is just a white outline version of your original sprite
the color property of this white sprite will now respond exactly to the RGB values you pass in
so pass in your light blue value to the white sprite and set the opacity correctly
then overlay it on your original sprite
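A quick sketch of that overlay (mySprite_white.png stands in for an all-white copy of the original artwork):

    // White copy of the sprite, tinted light blue at low opacity,
    // overlaid on the original sprite.
    CCSprite *tintOverlay = [CCSprite spriteWithFile:@"mySprite_white.png"];
    tintOverlay.position = ccp(mySprite.contentSize.width / 2,
                               mySprite.contentSize.height / 2);
    tintOverlay.color = ccc3(200, 200, 255);  // the light blue you actually want
    tintOverlay.opacity = 80;                 // 0-255; low so it reads as a wash
    [mySprite addChild:tintOverlay];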
Any good?
The only way you can achieve this is by overlaying (masking) the sprite with the proper OpenGL blend functions. When you say "add a light blue mask" then that's what you need to do. You may find this visual blendfunc tool helpful, if only to understand how blending with mask sprites works and what you can achieve with it. There's no built-in support for this by Cocos2D however, so you'll have to revert to pure OpenGL inside a cocos2d node's -(void) draw {} method.
Tinting (changing the color property) will only modify the RGB channels of the entire image by changing the vertex colors of all 4 vertices.
I would like to know how I can get the diameter (or radius) of an expanding circle animation at any point in time during the animation. I will end up stopping the animation right after I get the size, but I figure I can't stop it and remove it from the layer until I get the size of the circle.
For an example of how the expanding circle animation is implemented, it is a variation on the implementation shown in the addGrowingCircleAtPoint:(CGPoint)point method in the answer in the iPhone Quartz2D render expanding circle question.
I have tried checking various values on the layers, animation, etc., but can't seem to find anything. I figure worst case I can make a best guess by taking how far into its animation it currently is and using that to work out where it "should" be based on its from and to sizes. That seems like overkill for what I would assume is a value that is incrementing someplace I can just read easily.
Update:
I have tried several properties on the presentation layer, including the transform, but it never seems to change; the values are always the same regardless of what size the circle is when I check.
Okay, here is how you get the current state of an animation while it is animating.
While Rob was close, he left out two key pieces of information.
First, from layer.presentationLayer.sublayers you have to get the layer you are animating on, which for me is the only sublayer available.
Second, from this sublayer you cannot just access the transform directly; you have to use valueForKeyPath: to get transform.scale.x. I used x because it's a circle and x and y are the same.
I then use this to calculate the size of the circle at that point in time, based on the values used to create the arc.
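Roughly, that lookup looks like this (startingRadius is whatever radius the arc path was created with):

    // Grab the animating sublayer from the presentation layer, then read the
    // current scale out of its transform via key path.
    CALayer *circleLayer = [[layer.presentationLayer sublayers] objectAtIndex:0];
    CGFloat scaleX = [[circleLayer valueForKeyPath:@"transform.scale.x"] floatValue];

    CGFloat currentRadius   = startingRadius * scaleX;
    CGFloat currentDiameter = currentRadius * 2.0;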
I assume what you're trying to get to is the current CATransform3D, and that from that, you can get to your circle size.
What you want is the layer.presentationLayer.transform. See the CALayer docs for details on the presentationLayer. Also see the Core Animation Rendering Architecture.