draw with opengl less than every frame - objective-c

Is there any way, in cocos2d, to have a CCLayer draw via OpenGL less frequently than every frame? I have tried:
-(void) draw
{
    glEnable(GL_LINE_SMOOTH);
    if (iShouldUpdate) {
        ccDrawLine(ccp(50,50), ccp(200,200));
        iShouldUpdate = false;
    }
}

-(void) updateTheMap
{
    iShouldUpdate = true;
}
and then call updateTheMap whenever needed, but the line only displays for one frame.
Thanks.

Yes and no.
Normally the frame contents are cleared before a new frame renders, so every frame begins from a clean state. If you were to disable clearing of the screen, you could draw something once and it would stay on screen. But then you couldn't move what you've drawn without clearing exactly the parts you drew previously.
Since this gets overly complex and error-prone very quickly, the standard approach in games, for as long as there have been game engines, has been to clear the frame contents before beginning to draw a new frame. One notable exception is DooM, whose engine assumed that every pixel on screen would be updated every frame - unless there were missing textures, in which case you could see the famous Halls of Mirrors (HOM) effect that occurs when the framebuffer isn't cleared every frame.
So the standard is to redraw the entire screen contents every frame. Because of that, you can't draw something only every couple of frames: whatever you've drawn will be visible on screen for one frame and then it'll be gone.
In summary: you have to draw everything that should be visible on screen again every frame. That's the way almost all game engines work.
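Applied to the code in the question, a minimal sketch of that approach: keep a persistent flag (here a hypothetical showLine ivar, not cocos2d API) and draw the line every frame for as long as it should be visible.

-(void) draw
{
    glEnable(GL_LINE_SMOOTH);
    // Draw every frame while the flag is set, instead of resetting it after one frame.
    if (showLine) {
        ccDrawLine(ccp(50,50), ccp(200,200));
    }
}

-(void) updateTheMap
{
    showLine = YES;   // set it to NO elsewhere when the line should disappear
}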

Related

Metal -- skipping commandBuffer.present(drawable) to not display a frame?

In my Metal app for macOS, I have a situation where I only want to display the render results every so often. I want to complete the rendering pass every frame and save the drawable texture image to a file, but I only want to display the render every sixteenth frame or so. I tried just skipping commandBuffer.present(drawable) when I don't want to display, but it is not working: after skipping one call to commandBuffer.present(), it just stops displaying any new frames. It does continue to run, however.
Why would that happen? Once I commit a command buffer, is it required for it to be presented?
If I can't get this to work, then I will try to render into an offscreen buffer for these frames I don't want displayed. But it would be extra work and require more memory for the offscreen render buffer, so I'd rather just be able to use my regular onscreen render buffer if possible.
Thanks!
It's not required that a command buffer present a drawable. I think the issue is that, once you've obtained the drawable, it's not returned to the pool maintained by the CAMetalLayer (or, indirectly, MTKView) that provided it until it is presented.
Do not render to a drawable's texture if you don't plan on presenting. Rendering to an off-screen texture is the right approach. In fact, if you always render first to an off-screen texture and then, only for the frames you want to display, copy that to a drawable's texture, then you can leave the framebufferOnly property of the CAMetalLayer with its default true value. In that case, there's a decent chance that you won't increase the memory required (because the drawable's texture is really just part of the screen's backing store).
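As a rough sketch of that approach (not the poster's code; commandQueue, metalLayer, offscreenTexture, and frameIndex are assumed to exist elsewhere in the renderer, and the blit copy shown here additionally assumes the layer's framebufferOnly has been set to NO - drawing the off-screen texture into the drawable with a render pass instead would work with the default setting):

id<MTLCommandBuffer> commandBuffer = [commandQueue commandBuffer];

// 1. Render the scene into the off-screen texture every frame.
MTLRenderPassDescriptor *pass = [MTLRenderPassDescriptor renderPassDescriptor];
pass.colorAttachments[0].texture = offscreenTexture;
pass.colorAttachments[0].loadAction = MTLLoadActionClear;
pass.colorAttachments[0].storeAction = MTLStoreActionStore;
id<MTLRenderCommandEncoder> encoder =
    [commandBuffer renderCommandEncoderWithDescriptor:pass];
// ... encode the usual draw calls here ...
[encoder endEncoding];

// 2. Only on the frames to be shown, grab a drawable and copy into it.
if (frameIndex % 16 == 0) {
    id<CAMetalDrawable> drawable = [metalLayer nextDrawable];
    if (drawable) {
        id<MTLBlitCommandEncoder> blit = [commandBuffer blitCommandEncoder];
        [blit copyFromTexture:offscreenTexture
                  sourceSlice:0
                  sourceLevel:0
                 sourceOrigin:MTLOriginMake(0, 0, 0)
                   sourceSize:MTLSizeMake(offscreenTexture.width, offscreenTexture.height, 1)
                    toTexture:drawable.texture
             destinationSlice:0
             destinationLevel:0
            destinationOrigin:MTLOriginMake(0, 0, 0)];
        [blit endEncoding];
        [commandBuffer presentDrawable:drawable];
    }
}
[commandBuffer commit];
frameIndex++;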

Find out when SCNNode disappears from SCNScene

Does a particular method get called when an SCNNode is removed from the scene?
-(void)removeFromParentNode;
Does not get called on the SCNNode object.
To set the scene: I am using gravity to pull down an object. When the object goes too far down, it automatically disappears and the draw call and polygon counts decrease. So the SCNNode is definitely being destroyed, but is there a way I could hook into the destruction?
Other answers covered this pretty well already, but to go a bit further:
First, your node isn't being removed from the scene — its content is passing outside the camera's viewing frustum, which means SceneKit knows it doesn't need to issue draw calls to the GPU to render it. If you enumerate the child nodes of the scene (or of whatever parent contains the nodes you're talking about), you'll see that they're still there. You lose some of the rendering performance cost because SceneKit doesn't need to issue draw calls for stuff that it knows won't be visible in the frame.
(As noted in Tanguy's answer, this may be because of your zFar setting. Or it may not — it depends on which direction the nodes are falling out of the camera's view.)
But if you keep adding nodes and letting physics drop them off the screen, you'll accumulate a pre-render performance cost, as SceneKit has to walk the scene graph every frame and figure out which nodes it'll need to issue draw calls for. This cost is pretty small for each node, but it could eventually add up to something you don't want to deal with.
And since you want to have something happen when the node falls out of frame anyway, you just need to find a good opportunity to both deal with that and clean up the disappearing node.
So where to do that? Well, you have a few options. As has been noted, you could put something into the render loop to check the visibility of every node on every frame:
- (void)renderer:(id<SCNSceneRenderer>)renderer didSimulatePhysicsAtTime:(NSTimeInterval)time {
    if (![renderer isNodeInsideFrustum:myNode withPointOfView:renderer.pointOfView]) {
        // it's gone, remove it from the scene
    }
}
But that's a somewhat expensive check to be running on every frame (remember, you're targeting 30 or 60 fps here). A better way might be to let the physics system help you:
1. Create a node with an SCNBox geometry that's big enough to "catch" everything that falls off the screen.
2. Give that node a static physics body, and set up the category and collision bit masks so that your falling nodes will collide with it.
3. Position that node just outside of the viewing frustum so that your falling objects hit it soon after they fall out of view. (A rough setup sketch for these steps follows the delegate code below.)
4. Implement a contact delegate method to destroy the falling nodes:
- (void)physicsWorld:(SCNPhysicsWorld *)world didBeginContact:(SCNPhysicsContact *)contact {
    if (/* sort out which node is which */) {
        [fallingNode removeFromParentNode];
        // ... and do whatever else you want to do when it falls offscreen.
    }
}
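A rough setup sketch for steps 1-3 (the category constants, the catcher's size and position, and the scene and fallingNode references are placeholders, not values from the question):

// Bit mask categories for the physics bodies (hypothetical names).
static const NSUInteger kFallingCategory = 1 << 0;
static const NSUInteger kCatcherCategory = 1 << 1;

// A large, flat "catcher" box placed just below the viewing frustum.
SCNBox *catcherBox = [SCNBox boxWithWidth:200 height:1 length:200 chamferRadius:0];
SCNNode *catcherNode = [SCNNode nodeWithGeometry:catcherBox];
catcherNode.position = SCNVector3Make(0, -120, 0);
catcherNode.physicsBody = [SCNPhysicsBody staticBody];
catcherNode.physicsBody.categoryBitMask = kCatcherCategory;
catcherNode.physicsBody.contactTestBitMask = kFallingCategory;
[scene.rootNode addChildNode:catcherNode];

// Falling objects get the matching category, and the physics world needs a
// contact delegate so the didBeginContact: callback above actually fires.
fallingNode.physicsBody.categoryBitMask = kFallingCategory;
fallingNode.physicsBody.contactTestBitMask = kCatcherCategory;
scene.physicsWorld.contactDelegate = self;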
Your object will disappear if it goes farther than the zFar property of your active camera (the default value is 100.0).
As said by David Rönnqvist in the comments, your node is not destroyed and you can still modify its properties.
If you want to hook into your node's geometry disappearing, you can calculate the distance between your active camera and your node, check it every frame in your rendering loop, and trigger an action when it gets higher than 100.
If you want to render your node at a greater distance, you can just increase the zFar property of your camera.
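For example, a minimal sketch of that per-frame check from an SCNSceneRendererDelegate callback (cameraNode and fallingNode are assumed references to your camera node and the node you're watching):

- (void)renderer:(id<SCNSceneRenderer>)renderer updateAtTime:(NSTimeInterval)time
{
    SCNVector3 camPos  = cameraNode.presentationNode.position;
    SCNVector3 nodePos = fallingNode.presentationNode.position;
    float dx = nodePos.x - camPos.x;
    float dy = nodePos.y - camPos.y;
    float dz = nodePos.z - camPos.z;
    if (sqrtf(dx * dx + dy * dy + dz * dz) > cameraNode.camera.zFar) {
        // the node is beyond zFar and no longer rendered; react here
    }
}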

ipad frame max size is not enough

I'm developing an iPad application for 2D drawing.
I need a UIView frame size of 4000x4000, but if I set a frame of that size the application crashes after a memory warning.
Right now I'm using a 1600x1000 frame, and the user can add new objects (rectangles) to it. The user can also translate the frame along the x and y axes with a pan gesture in order to see or add new objects.
Do you have any suggestions? How can I tackle this problem?
thanks
Well, I would suggest what video games have used for a long time - a tiled LOD mechanism, where tiles are rendered at increasing resolution only as you zoom in toward them, while zoomed out you only render a lower resolution.
If the drawing is based on shapes (rectangles, points, lines, or anything that can be represented by simple vector data), there is no reason to create a UIView the size of the entire drawing. You just redraw the currently visible view as the user pans across the drawing, using the stored vector data. There is no persistent bitmapped representation of the drawing.
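As a rough illustration of that vector-data idea (not from the answer; DrawingView, contentOffset, shapes, and panBy: are hypothetical names), a screen-sized view can keep the shapes as model data and redraw only what falls inside the visible region:

#import <UIKit/UIKit.h>

@interface DrawingView : UIView
@property (nonatomic) CGPoint contentOffset;     // pan position into the 4000x4000 canvas
@property (nonatomic, strong) NSArray *shapes;   // e.g. NSValue-wrapped CGRects in canvas coordinates
@end

@implementation DrawingView

- (void)drawRect:(CGRect)rect
{
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    // The region of the big canvas currently on screen.
    CGRect visible = CGRectOffset(self.bounds, self.contentOffset.x, self.contentOffset.y);
    for (NSValue *value in self.shapes) {
        CGRect shape = [value CGRectValue];
        if (!CGRectIntersectsRect(shape, visible)) {
            continue;   // skip shapes that are not visible right now
        }
        // Convert from canvas coordinates to view coordinates and draw.
        CGContextStrokeRect(ctx, CGRectOffset(shape, -self.contentOffset.x, -self.contentOffset.y));
    }
}

- (void)panBy:(CGPoint)translation
{
    self.contentOffset = CGPointMake(self.contentOffset.x - translation.x,
                                     self.contentOffset.y - translation.y);
    [self setNeedsDisplay];   // redraw only the screen-sized visible window
}

@end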
If using bitmap data for drawing (i.e. a Photoshop type of app) then you'll likely need to use a mechanism that caches off-screen data into secondary storage and loads it back onto the screen as the user pans across it. In either case, the UIView only needs to be as big as the physical screen size.
Sorry I don't have any iOS code examples for any of this - take this as a high-level abstraction and work from there.
Sounds like you want to be using UIScrollView.

Redraw old buffer question

My main scene is composed of GL_POINTS in 3D space. What I would like to do is be able to draw a single GL_LINES line (a 2D overlay) on top of the scene as the user moves his finger across the screen, while retaining the underlying 3D GL_POINTS state. I am having trouble understanding if this is possible. Do I need two framebuffers? How do I save the previous framebuffer data of GL_POINTS and re-render that in subsequent frames? Do I need to mix framebuffers - one for the GL_LINES layer and one for the GL_POINTS data?
I tried only calling presentFramebuffer without calling setFramebuffer, but that retains every GL_LINES segment drawn in previous frames - which I do not want. How do I retain parts of the framebuffer and remove other parts?
You do not need two framebuffers at all. The framebuffer is your screen memory; just render all the stuff onto the one you have.
If by "framebuffer" you actually mean a VBO (vertex buffer object), those are not the same thing at all.
If you render the same data (vertices), you need just one VBO and call glDrawArrays/glDrawElements twice: once with GL_POINTS and once with GL_LINES/GL_LINE_LOOP or whatever.
If you render different data, you need two VBOs, or if there are only a few lines you can still use glBegin/glEnd for them instead.
If you just need separate areas of the view, you can use clipping, change the viewport, overwrite borders with quads, draw to a texture, and so on; there are a lot more options there.
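A minimal per-frame sketch of that idea (OpenGL ES 1.x fixed-function style, matching the setFramebuffer/presentFramebuffer code in the question; pointVBO, pointCount, and the line endpoints are assumed, and projection/matrix setup for the 2D overlay is omitted):

[self setFramebuffer];
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);   // start from a clean frame

// 3D point cloud drawn from its VBO.
glEnableClientState(GL_VERTEX_ARRAY);
glBindBuffer(GL_ARRAY_BUFFER, pointVBO);
glVertexPointer(3, GL_FLOAT, 0, 0);
glDrawArrays(GL_POINTS, 0, pointCount);
glBindBuffer(GL_ARRAY_BUFFER, 0);

// 2D overlay line drawn straight from client memory (just two vertices).
GLfloat lineVertices[] = { startX, startY, endX, endY };
glVertexPointer(2, GL_FLOAT, 0, lineVertices);
glDrawArrays(GL_LINES, 0, 2);

[self presentFramebuffer];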

In a double-buffered OpenGL context, is it possible that the front and back buffer are the same?

I have a situation in which I ask for and get a double-buffered OpenGL context, but when I draw into it, both the front and back buffers are affected. The draw buffer is set to the back buffer (and only the back buffer). If I look in OpenGL Profiler, I do see all of that: the value of GL_DRAW_BUFFER is GL_BACK, yet both the back and front buffers are actually being drawn to.
Since I'm working with an NSWindow that has a backing store, we do not see any of this happening on the screen. The problem is that I'm getting screenshots of this window with CGWindowListCreateImage. This function seems to fetch the image from the front buffer, and not from the screen buffer (wherever that is...). So the returned image is incomplete: it only contains the elements that are drawn at the moment it is grabbed, even if no flush has been called.
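(Roughly how such a capture is made, as a sketch rather than the poster's exact code; windowID is assumed to be the CGWindowID of the window being grabbed:

CGImageRef image = CGWindowListCreateImage(CGRectNull,
                                           kCGWindowListOptionIncludingWindow,
                                           windowID,
                                           kCGWindowImageBoundsIgnoreFraming);
// ... write `image` to a file, then CGImageRelease(image); ...
)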
There is a utility in the Mac developer package called Pixie. It basically grabs the screen at the mouse position and displays it zoomed in so you can analyze it. This program has the same behavior as calling CGWindowListCreateImage: you can see incomplete images. So I guess the problem is not with the way I use CGWindowListCreateImage, but rather with my window or my display...
Also, it does not seem to happen all the time. Not every window shows this behavior, and even for a given window it seems to come and go, especially if I move the window to a different screen (in a dual-display setup).
Anyone faced this before?