OpenGL - animation stuttering when in full screen - vb.net

I'm currently running into a problem with animation in OpenGL. I have between 200 and 10,000 gears on screen at a time, all rotating. When the window is not maximized, my CPU usage sits at a consistent 10-20%: no spikes, no stuttering, and the animation runs perfectly smoothly regardless of the number of gears on screen. When I maximize the window, though, everything falls apart. My CPU maxes out, I get strange spikes in CPU usage, the animation starts stuttering as a result, and it just looks really ugly, even with only 200 gears on screen.
My animation technique looks like this:
While Animating
    Calculate the current rotation angle from a running timer
    Draw the image
    Call glFlush()
End While
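In real code the loop is roughly the following; I've written it as a plain C/OpenGL sketch rather than pasting my actual VB.net code, so draw_gears() and the rotation speed are just placeholders:

#include <math.h>
#include <GL/gl.h>

/* Placeholder for the real drawing routine, which applies a few
   glRotated/glScaled calls and emits the gear geometry. */
static void draw_gears(double angleDeg)
{
    glPushMatrix();
    glRotated(angleDeg, 0.0, 0.0, 1.0);
    /* ... gear geometry for each of the 200-10000 gears ... */
    glPopMatrix();
}

/* Called once per frame with the elapsed time from a running timer. */
void render_frame(double elapsedSeconds)
{
    const double degreesPerSecond = 90.0;   /* illustrative speed */
    double angle = fmod(elapsedSeconds * degreesPerSecond, 360.0);

    glClear(GL_COLOR_BUFFER_BIT);
    draw_gears(angle);
    glFlush();   /* with a double-buffered context the framework swaps buffers after this */
}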
If it helps, I'm using the Tao framework in VB.net. I'm not performing any calculations other than the rotation-angle math mentioned above, plus a few glRotated and glScaled calls in the method that draws the image.
In addition, I guess I was under the impression that, for an orthographic 2D drawing that scales when the window is resized, drawing would always take the same amount of time regardless of the window size. Is this a correct assumption?
Any help is greatly appreciated =)
Edit
Note that I've seen the animation run perfectly smoothly at full screen before. Every once in a while, OpenGL will decide it's happy and run perfectly at full screen using 10-20% of the CPU (the same as when not maximized). I haven't pinpointed what causes this, though: it will run perfectly one time, then, without my changing anything, I'll run it again and hit the choppiness. I simply want to pinpoint what causes the animation to slow down and eliminate it.
I've run dotTrace on my program and it says that the swapBuffers method is using 55% of my processing time, even though I'm never explicitly calling it. Is the method called by something else that I can eliminate, or is this simply OpenGL's "dead time" while it limits the animation to 60 fps?
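For what it's worth, I assume I could rule vsync in or out by toggling the swap interval. A plain C sketch of the underlying WGL extension call (which the Tao bindings should also expose) would look something like this:

#include <windows.h>
#include <GL/gl.h>

/* WGL_EXT_swap_control: interval 0 = don't wait for vsync, 1 = wait for vblank. */
typedef BOOL (WINAPI *PFNWGLSWAPINTERVALEXTPROC)(int interval);

void set_vsync(int enabled)
{
    /* Must be called while an OpenGL rendering context is current. */
    PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT =
        (PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT");

    if (wglSwapIntervalEXT != NULL)
        wglSwapIntervalEXT(enabled ? 1 : 0);
}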

I was under the impression that, for an orthographic 2D drawing that scales when the window is resized, drawing would always take the same amount of time regardless of the window size. Is this a correct assumption?
If only :)
More pixels require more memory bandwidth/shader units/etc. Look into fillrate.
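As a rough illustration (the resolutions are just examples): a 1024×768 window is about 0.8 million pixels, while a 1920×1080 full-screen window is about 2.1 million, so every frame touches roughly 2.6× as many pixels before you even count overdraw from overlapping gears.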

Related

GLUT animation leads to 100% utilization of 1 core when the window is invisible

I developed a Python program that uses PyOpenGL and GLUT for window management to show an animation. In order to have the animation run at the fastest possible framerate, I set
glutIdleFunc(glutPostRedisplay)
as recommended e.g. here.
That works well: I get a steady 60 FPS without much CPU load.
However, as soon as the window is hidden by another window, one CPU core jumps to 100% utilization.
My suspicion is that while the window is visible, the rate at which the glutDisplayFunc is called is limited because it contains a call to glutSwapBuffers(), which waits for vsync, and that this limiting stops working when the window is invisible.
I tried to solve the problem by keeping track of visibility (through a glutVisibilityFunc) and putting the following code at the beginning of my glutDisplayFunc:
if not visible:
    time.sleep(0.1)
    return
This does not however have the desired effect.
What's happening here, and how do I avoid it?
I found the solution here, and it is obvious once you know it: disable glutPostRedisplay as the idle function when the window becomes invisible. Concretely, use a glutVisibilityFunc like this:
def visibility(state):
    if state == GLUT_VISIBLE:
        # Resume redrawing as fast as possible while the window is visible.
        glutIdleFunc(glutPostRedisplay)
    else:
        # Stop forcing redisplays so the idle loop no longer spins at 100% CPU.
        glutIdleFunc(None)

Objective-C, Core-Plot Real Time Graph vs CPU

I've been working on implementing a real-time Core Plot graph in my application on OS X. To my dismay, I noticed a fairly significant issue: once the line gets to the end of the x-axis and the graph starts scrolling to keep up with it, the CPU load hits 30-35% non-stop.
Before proceeding any further, I figured I had better go back and see whether I had made some kind of mistake in my code that would make the CPU spike like that. I didn't notice anything out of the ordinary, and adjusting the frame rate and update frequency didn't help. I then went back to the real-time example project included with Core Plot, and it has the same effect on the CPU.
Is there anything I can do about this, or is that just the nature of real-time graphing on OS X?
[Screenshot of the CPU usage graph omitted.]
Everything is fine for the first 50 frames (indicated in the screenshot by the line with arrows), but once the plot reaches the end of them and starts scrolling, that's where things take a turn for the worse.
Side Note:
I noticed Swift does graphing in the playground, and even though it's apparently not real time (and I'm using Obj-C), it looks really sharp. Is the Swift graphing feature only available within playgrounds, or is there a way to use it in a project? I'm only mentioning this because I'm looking for something efficient soon.
That's the expected behavior with Core Plot. Once the graph starts to scroll, it has to redraw the plot, both axes, and all of the grid lines for each animation frame. You could reduce the drawing load by decreasing the number of grid lines and/or axis tick marks.
The playground graphs are a private part of the playground environment.

UIView animateWithDuration: slows down animation frame rate

I am using CADisplayLink to draw frames using the EAGLView method in a game at 60 times per second.
When I call UIView animateWithDuration: the framerate drops down to exactly half, from 60 to 30 fps for the duration of the animation. Once the animation is over, the fps rises instantly back up to 60.
I also tried using NSTimer animation method instead of CADisplayLink and still get the same result.
The same behavior happens when I press the volume buttons while the speaker icon is fading out, so that animation may be using animateWithDuration too. Since I'd like the speaker icon to animate smoothly in my app, I can't just rewrite my own animation code to use a method other than animateWithDuration; I need a solution that works with it.
I am aware that there is an option to slow down animations for debugging in the simulator; however, I am experiencing this on the device and no such option is enabled. I also tried various options for animateWithDuration, such as the linear curve and the allow-user-interaction option, but none of them made a difference.
I am also aware I can design an engine that can still work with a frame rate that varies widely. However, this is not an ideal solution to this problem, as high fps is desirable for games.
Has anyone seen this problem or solved it before?
The solution to this is to do your own animation and blit during the CADisplayLink callback.
1) For the volume issue, put a small volume icon in the corner, or show it when the user takes some predefined touch action, and give them touch controls. With that input you can use AVAudioPlayer to vary the volume and avoid the system control altogether. You might even be able to detect that the user has pressed the volume buttons and pop up a note telling them to use your control instead. This gets you away from any animations performed by the system.
2) When you have an animation you want to run, create a series of images in code (either on the fly or beforehand), and every so many CADisplayLink callbacks blit the next image to the screen.
Here's an old thread that describes similar drops in frame rate. In that case, the cause of the problem was adding two or more semi-transparent sprites, but I'd guess that any time you try to composite several layers together you may be doing enough work to cut the frame rate, and animateWithDuration very likely does exactly that kind of thing.
Either use OpenGL or CoreAnimation. They are not compatible.
To test this, remove any UIView animation and the frame rate will be what you expect; add the UIView animation back and it will drop to 30 fps.
You said:
When I call UIView animateWithDuration: the framerate drops down to exactly half, from 60 to 30 fps for the duration of the animation. Once the animation is over, the fps rises instantly back up to 60
I don't know why you're not accepting my answer; this is exactly what happens when you combine UIView animation with CA animation that doesn't go through a UIView.

OpenGL ES Graphics issue when not calling glClear()

I'm working on an iPad app that has a few thousand particles that the user can manipulate with touches. To produce interesting designs, I want the drawing of a particle at a given location to persist rather than be cleared on the next frame, which creates a sort of "trails" effect. At the moment I do this by skipping the glClear() call each frame when "trails" is turned on, so each frame's drawing is added on top of the previous frame's. This works fine in the iPad simulator, but for some reason, on an actual device, turning trails on makes the particle trails flicker, as if something strange is going on with the buffers.
Is there a better way to produce trails, and why does this graphics problem only show up on the device and not in the simulator?
Thanks!
glClear() is called between frames so that you begin drawing the next one on a clean slate; you really do need to clear the buffer between frames. It's not good practice to keep accumulating drawing in the buffer, as you can start producing artifacts (as you are noticing). With double buffering, the buffer you draw into is not the one you just displayed, so skipping the clear leaves you drawing on top of a frame from two frames ago, which is likely why it flickers on the device even though the simulator happens to tolerate it.
To produce the trailing effect, you would probably want to use additional particles. Keep track of the particle's position or velocity, and then draw additional particles on the trail.
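A minimal sketch of that idea using OpenGL ES 1.1 fixed-function drawing (the Particle struct, trail length, point size, and colors here are all made up for illustration):

#include <OpenGLES/ES1/gl.h>

#define TRAIL_LEN 16

typedef struct {
    GLfloat x, y;              /* current position */
    GLfloat histX[TRAIL_LEN];  /* ring buffer of recent positions */
    GLfloat histY[TRAIL_LEN];
    int     head;
} Particle;

/* Call once per frame, after moving the particle, to remember where it was. */
static void record_position(Particle *p)
{
    p->histX[p->head] = p->x;
    p->histY[p->head] = p->y;
    p->head = (p->head + 1) % TRAIL_LEN;
}

/* Draw the remembered positions as extra points, oldest ones most transparent,
   so the screen can still be cleared with glClear() every frame. */
static void draw_trail(const Particle *p)
{
    GLfloat verts[TRAIL_LEN * 2];
    GLfloat colors[TRAIL_LEN * 4];

    for (int i = 0; i < TRAIL_LEN; i++) {
        int idx = (p->head + i) % TRAIL_LEN;           /* oldest sample first */
        GLfloat alpha = (GLfloat)(i + 1) / TRAIL_LEN;  /* fade toward the tail */
        verts[i * 2 + 0]  = p->histX[idx];
        verts[i * 2 + 1]  = p->histY[idx];
        colors[i * 4 + 0] = 1.0f;
        colors[i * 4 + 1] = 1.0f;
        colors[i * 4 + 2] = 1.0f;
        colors[i * 4 + 3] = alpha;
    }

    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_COLOR_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, verts);
    glColorPointer(4, GL_FLOAT, 0, colors);
    glPointSize(4.0f);
    glDrawArrays(GL_POINTS, 0, TRAIL_LEN);
    glDisableClientState(GL_COLOR_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
}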

General iOS graphics efficiency

I'm working on a simple program that has 500 "particles" that have an x and a y coordinate. They move around the screen and respond to touches. As I go past 500 particles the app starts running much slower. Using CPU sampler I discovered that drawing the particles is taking up the most CPU time.
This is the drawing code:
// Set the fill color for this particle (red, green, and blue are 0-255 values).
CGContextSetFillColorWithColor(context, [UIColor colorWithRed:red/255 green:green/255 blue:blue/255 alpha:1].CGColor);
// Add a 9x9 ellipse at the particle's position to the path and fill it.
CGRect rectangle = CGRectMake(xpos, ypos, 9, 9);
CGContextAddEllipseInRect(context, rectangle);
CGContextFillPath(context);
red, green, and blue are floats used to change the color of the particles based on their speed, but they aren't the problem.
This is how I was taught to use Quartz, and it works just fine for most drawing, but here the code is executed 500+ times and the game starts slowing down. I've run the program under the CPU sampler with the drawing code commented out, and there is hardly any CPU usage despite all the math going on in the background.
Is there a more efficient way to draw circles in iOS?
You can try two different approaches to help speed up performance:
1) Use a prerendered UIImage/CGImage instead of drawing the circles yourself (this won't give you the ability to change colors/sizes dynamically, but maybe you only need a limited range for your app).
2) Use OpenGL with GL_POINTS (see the sketch after this list).
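A rough OpenGL ES 1.1 sketch of the GL_POINTS approach (the Particle struct layout and point size are just examples):

#include <OpenGLES/ES1/gl.h>

typedef struct { GLfloat x, y; GLfloat r, g, b, a; } Particle;

/* Submit every particle's position and color in a single draw call,
   instead of issuing 500+ separate Quartz path fills. */
void draw_particles(const Particle *particles, int count)
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_COLOR_ARRAY);

    glVertexPointer(2, GL_FLOAT, sizeof(Particle), &particles[0].x);
    glColorPointer(4, GL_FLOAT, sizeof(Particle), &particles[0].r);

    glPointSize(9.0f);   /* matches the 9x9 circles drawn above */
    glDrawArrays(GL_POINTS, 0, count);

    glDisableClientState(GL_COLOR_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
}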
Quartz is generally slower than OpenGL, especially for path-based drawing, from all the research I've done on the iPhone. Refer to the iPhone dev forums and you'll see a general consensus about this.
Making a layer (CALayer) for each particle might actually make sense. In general, doing drawing "yourself" in -drawRect: is the path to slowness on iOS. Avoid it if at all possible.