I want to draw a circle in my application.
Specifically, there is a 20-second timer, and I have to draw the circle in red and green according to the remaining time.
Please help me if you have code or similar examples.
To draw a circle you may use (in your drawRect: method):
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextBeginPath(context);
// Full circle: sweep the arc from 0 to 2*PI around (CENTER_X, CENTER_Y)
CGContextAddArc(context, CENTER_X, CENTER_Y, RADIUS, 0, 2 * M_PI, 0);
CGContextDrawPath(context, kCGPathFillStroke);
To simulate the timer, you may consider using CGContextMoveToPoint and CGContextAddLineToPoint to draw lines, and CGContextSetFillColor to change the current fill color.
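For example, here is a rough sketch of how the red/green countdown could look, assuming the view keeps a secondsRemaining value out of a 20-second total (secondsRemaining, CENTER_X, CENTER_Y, and RADIUS are made-up names for illustration):
// Hypothetical sketch: green pie slice for elapsed time, red for the rest.
CGContextRef context = UIGraphicsGetCurrentContext();
CGFloat fraction = secondsRemaining / 20.0f;        // assumed ivar in [0, 20]
CGFloat splitAngle = 2 * M_PI * (1.0f - fraction);  // where the green slice ends
// Elapsed portion in green, starting from 12 o'clock
CGContextSetFillColorWithColor(context, [UIColor greenColor].CGColor);
CGContextMoveToPoint(context, CENTER_X, CENTER_Y);
CGContextAddArc(context, CENTER_X, CENTER_Y, RADIUS, -M_PI_2, -M_PI_2 + splitAngle, 0);
CGContextClosePath(context);
CGContextFillPath(context);
// Remaining portion in red
CGContextSetFillColorWithColor(context, [UIColor redColor].CGColor);
CGContextMoveToPoint(context, CENTER_X, CENTER_Y);
CGContextAddArc(context, CENTER_X, CENTER_Y, RADIUS, -M_PI_2 + splitAngle, -M_PI_2 + 2 * M_PI, 0);
CGContextClosePath(context);
CGContextFillPath(context);
Have your timer callback call [self setNeedsDisplay] each tick so drawRect: runs again with the new value.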
Check CGContext Reference
This is not a complete answer, but I hope it helps: You probably want to look at the documentation for the following functions:
CGContextAddArc
CGContextAddArcToPoint
CGContextFillEllipseInRect
CGContextStrokeEllipseInRect
And a Google search for those functions will probably find some useful sample code.
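For instance, if you don't need an arc path at all, the ellipse calls give you a circle in one line each (the rect here is an arbitrary example; a square rect yields a circle):
CGContextRef context = UIGraphicsGetCurrentContext();
CGRect circleRect = CGRectMake(20.0, 20.0, 100.0, 100.0); // square rect -> circle
CGContextSetFillColorWithColor(context, [UIColor redColor].CGColor);
CGContextFillEllipseInRect(context, circleRect);
CGContextStrokeEllipseInRect(context, circleRect);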
I know MBProgressHUD on GitHub has that ability. I have used it before and it is very easy to implement.
Hey, so I'm pretty new to ObjC and coding in general. Essentially I want to make a circle move along a UIBezierPath (close to a sine curve) one 'unit' every hour, and make its shadow small and yellow, then bigger and white, then small and dim again as it moves up and then down the curve. The crest of the sine curve should be midday (noon), and the 'tails' of the curve should be midnight on both sides. Is this even possible? And where can I find the resources to help me? I couldn't seem to find anything online, since I don't know what I need to achieve this. Thanks!
You use the CAKeyframeAnimation class to move the view along a path. You create a path and animate the position property of the view's layer. An example of this is in Apple's Core Animation documentation, in the "Using a Keyframe Animation to Change Layer Properties" section. The other things you want to do with the shadow can be done with CABasicAnimation. You can animate a shadow's color, offset, radius, path, and opacity.
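A minimal sketch of both pieces, assuming circleView is the view holding your circle and path is the UIBezierPath you built for the arc (both names are placeholders, and the durations are illustrative):
// Move the layer along the bezier path over one day (86400 s)
CAKeyframeAnimation *move = [CAKeyframeAnimation animationWithKeyPath:@"position"];
move.path = path.CGPath;               // assumed UIBezierPath
move.duration = 86400.0;               // 24 hours
move.calculationMode = kCAAnimationPaced;
[circleView.layer addAnimation:move forKey:@"sunPath"];
// Grow and shrink the shadow in sync with a basic animation
CABasicAnimation *shadow = [CABasicAnimation animationWithKeyPath:@"shadowRadius"];
shadow.fromValue = @2.0;
shadow.toValue = @10.0;
shadow.duration = 43200.0;             // half a day growing...
shadow.autoreverses = YES;             // ...then back down
[circleView.layer addAnimation:shadow forKey:@"sunShadow"];
For testing, you would obviously want much shorter durations than a full day.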
I have an iOS OpenGL ES 2.0 3D game and am working to get transparent textures working nicely, in this particular example for a fence.
I'll start with the final result. The bits of green background/clear color are coming through around the edges of the fence - note how it isn't ALL edges and some of it is ok:
The reason for the lack of bleed in the top right is the order of operations. As you can see from the following shots, the draw order includes some buildings that get drawn BEFORE the fence, but most of them are drawn after the fence:
So one solution is to always draw my transparent textured objects last. I would like to explore other solutions, as my pipeline might not always allow this. I'm looking for other suggestions to solve this problem without sorting my draws.
This is likely a depth or blend function issue, but I've tried a ton of stuff and nothing seems to work (different blend functions, different discard alpha levels, different background colors, different texture settings).
Here are some specifics of my implementation.
In my frag shader I'm throwing out fragments that have transparency - this way they won't render to depth:
lowp vec4 texVal = texture2D(sTexture, texCoord);
if(texVal.w < 0.5)
discard;
I'm using one giant PVR texture atlas with mipmapping - the texture itself SHOULD just have 0 or 1 for alpha, but something with the blending could be causing this:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
I'm using the following blending when rendering:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
Any suggestions to fix this bleed would be great!
EDIT: I tried a different min filter for the texture as suggested in the comments (LINEAR/NEAREST), but got the same result. Note I have also tried NEAREST/NEAREST with no luck:
Try increasing the alpha cutoff threshold in the shader:
lowp vec4 texVal = texture2D(sTexture, texCoord);
if(texVal.w < 0.9)
discard;
I know this is an old question, but I came across it several times whilst trying to find an answer to my very similar OpenGL issue. I thought I'd share my findings here for anyone in a similar situation. The culprit in my code looked like this:
glClearColor(1, 0, 1, 0);
glClear(GL_COLOR_BUFFER_BIT);
I used a transparent pink colour for ease of visual reference whilst debugging. Despite the fact that it was transparent, when blending between the background and the colour of the subject it would bleed in, much like the symptoms in the screenshot in the question. What fixed it for me was wrapping this code in a colour mask for the glClear step. It looked like this:
glColorMask(false, false, false, true);   // write only the alpha channel
glClearColor(1, 0, 1, 0);                 // pink, fully transparent
glClear(GL_COLOR_BUFFER_BIT);             // clears alpha; RGB is untouched
glColorMask(true, true, true, true);      // re-enable all channels
To my knowledge, this means that when the clear kicks in, it only operates on the alpha channel. After that, all channels are re-enabled so rendering continues as intended. If someone with a more solid knowledge of OpenGL can explain it better, I'd love to hear!
I am using the CIContext method - (void)drawImage:(CIImage *)im inRect:(CGRect)dest fromRect:(CGRect)src to draw my image to the screen, but I need to implement zoom-in/zoom-out. How could I achieve it? I think zoom-in could be achieved by increasing the dest rect, because the Apple docs say:
The image is scaled to fill the destination rectangle.
But what about zoom-out? Because if the dest rectangle is scaled down, the image is drawn at its actual size, but only part of the image is visible (the part that fits in the dest rectangle).
What could you suggest?
You may try using this for image resizing (zooming). Hope this helps you.
Take a look at this little toy app I made. It demonstrates the NSImage version of your CIContext method:
- (void)drawInRect:(NSRect)dstRect
fromRect:(NSRect)srcRect
operation:(NSCompositingOperation)op
fraction:(CGFloat)delta
I did this to find out exactly how the rects relate to each other. It's interactive; you can play with the sliders and move/zoom the images. Not a solution, but it might help you work things out.
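For what it's worth, the relationship is the same idea as in your CIContext call. A rough sketch, assuming image is an NSImage and this runs inside an NSView's drawRect: (the rect values are made up):
// Zoom in 2x on the image's centre: draw a half-size source region
// into a full-size destination rectangle.
NSRect dst = self.bounds;
NSRect src = NSMakeRect(image.size.width / 4, image.size.height / 4,
                        image.size.width / 2, image.size.height / 2);
[image drawInRect:dst
         fromRect:src
        operation:NSCompositeSourceOver
         fraction:1.0];
Shrinking srcRect relative to dstRect zooms in; growing it zooms out.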
You can use CIFilter to resize your CIImage before drawing. Quartz Composer comes with a good example of using this filter. Just look up the description of the Image Resize filter in QC.
EDIT:
Another good filter for scaling is CILanczosScaleTransform. There is a snippet demonstrating basic usage.
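A minimal sketch of that filter, with an illustrative scale factor of 0.5 for zooming out (inputImage stands in for your CIImage):
CIFilter *scaleFilter = [CIFilter filterWithName:@"CILanczosScaleTransform"];
[scaleFilter setValue:inputImage forKey:kCIInputImageKey];   // your CIImage
[scaleFilter setValue:@0.5 forKey:kCIInputScaleKey];         // < 1 zooms out
[scaleFilter setValue:@1.0 forKey:kCIInputAspectRatioKey];   // keep proportions
CIImage *scaled = [scaleFilter valueForKey:kCIOutputImageKey];
// Then draw 'scaled' with your existing drawImage:inRect:fromRect: call.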
I am trying to draw some text via Quartz onto an NSView via CGContextShowTextAtPoint(). This worked well until I overrode (BOOL)isFlipped to return YES in my NSView subclass in order to position the origin in the upper-left for drawing. The text draws in the expected area but the letters are all inverted. I also tried the (theoretically, at least) equivalent of flipping my CGContext and translating by the context's height.
e.g.:
// drawRect:
CGContextScaleCTM(theContext, 1, -1);
CGContextTranslateCTM(theContext, 0, -dirtyRect.size.height);
This yields the same result.
Many suggestions for similar problems have pointed to modifying the text matrix. I've set the text matrix to the identity matrix, performed an additional inversion on it, and done both, respectively. All these solutions have led to even stranger rendering of the text (often just a fragment shows up).
Another suggestion I saw was to simply steer clear of this function in favor of other means of drawing text (e.g. NSString's drawing methods). However, this is being done amongst mostly C++/C and I'd like to stay at those levels if possible.
Any suggestions are much appreciated and I'd be happy to post more code if needed.
Thanks,
Sam
This question has been answered here.
Basically, it's because the coordinate system in iOS Core Graphics is flipped (x:0, y:0 in the top left) as opposed to the one on the Mac (where x:0, y:0 is at the bottom left). The solution for this is setting the text transform matrix like this:
CGContextSetTextMatrix(context, CGAffineTransformMake(1.0,0.0, 0.0, -1.0, 0.0, 0.0));
You need to use the view's bounds rather than the dirtyRect and perform the translation before the scale:
CGContextTranslateCTM(theContext, 0, -NSHeight(self.bounds));
CGContextScaleCTM(theContext, 1, -1);
Turns out the answer was to modify the text matrix. The weird "fragments" that were showing up instead of the text appeared because the font size (set via CGContextSelectFont()) was too small once the "default" text matrix was replaced. The initial matrix had, for some reason, a large scale transform, so smaller font sizes looked fine while the matrix was unmodified; when it was replaced with an inverse scale (1, -1) or an identity matrix, however, the text became unreadably small.
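A minimal sketch of that kind of fix, with an illustrative font and size rather than the original code (this assumes the flipped drawRect: from the question):
CGContextSelectFont(theContext, "Helvetica", 14.0, kCGEncodingMacRoman);
// Flip the text matrix so glyphs render upright in the flipped view
CGContextSetTextMatrix(theContext, CGAffineTransformMakeScale(1.0, -1.0));
CGContextSetTextDrawingMode(theContext, kCGTextFill);
CGContextShowTextAtPoint(theContext, 20.0, 20.0, "Hello", 5);
The key point is that the font size has to be chosen with the new text matrix in mind, since the replacement matrix no longer carries the old scale.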
Hi,
I have a background pattern image (vertical lines), like:
llllllllllllllllllllllllll
The line is a pattern image:
bgimg.backgroundColor = [UIColor colorWithPatternImage:[UIImage imageNamed:@"pd_line2.png"]];
My problem:
I would like to reframe bgimg so that it repeats the line on the x-axis but stretches the line on the y-axis.
Next problem: the pattern image also uses an alpha channel.
Is this possible?
Can someone please give me a hint?
Thanks in advance.
Well, you can certainly scale the image in various ways via CGContext functions, but perhaps the easier way would be to make your view transparent (clearColor background) and add an extra UIImageView (with pd_line2.png in it) underneath.
UIImageView's built-in stretching options may be enough to achieve what you need without any coding.
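If you go the scaling route instead, here is a minimal sketch, assuming the stretch target is bgimg's own height (all names taken from the question):
// Pre-stretch the line image to the view's height, then let the
// pattern colour keep repeating it along the x-axis as before.
UIImage *line = [UIImage imageNamed:@"pd_line2.png"];
CGSize target = CGSizeMake(line.size.width, bgimg.bounds.size.height);
UIGraphicsBeginImageContextWithOptions(target, NO, 0.0);  // NO = keep alpha
[line drawInRect:CGRectMake(0, 0, target.width, target.height)];
UIImage *stretched = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
bgimg.backgroundColor = [UIColor colorWithPatternImage:stretched];
This keeps the x-axis tiling from colorWithPatternImage: while handling the y-stretch once up front, and passing NO for opaque preserves the alpha channel.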