I'm programming a nice little game that uses shader generated simplex noise for displaying on the fly computed random terrain.
I'm using Objective-C and Xcode 4 and I have gotten everything to run nicely using a subclass of NSOpenGLView. The subclass first compiles the shader and then renders a quad with the noise texture. The program has no problems running this at an acceptable speed (60Hz).
The subclass of NSOpenGLView uses an NSRunLoop to fire a selector that in turn calls drawRect:(NSRect)dirtyRect. This happens every frame.
Now, I want the shader to use a uniform that is updated each frame.
The shader should be able to react to a variable change that might occur every frame, so I'm trying to update the uniform at that frequency. The update of the uniform is done in the drawRect:(NSRect)dirtyRect method.
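For reference, the per-frame update is roughly structured like this (a simplified sketch; shaderProgram, timeUniform and elapsedTime are placeholder names for my variables):

// Fired by the run loop timer roughly 60 times per second.
- (void)timerFired:(NSTimer *)timer
{
    [self setNeedsDisplay:YES];   // schedules a call to drawRect:
}

- (void)drawRect:(NSRect)dirtyRect
{
    [[self openGLContext] makeCurrentContext];

    glUseProgram(shaderProgram);              // uniforms apply to the bound program
    glUniform1f(timeUniform, elapsedTime);    // the per-frame update in question
    NSLog(@"uniform set to %f", elapsedTime); // this always fires

    // ... render the quad with the noise shader ...

    [[self openGLContext] flushBuffer];
}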
I am partially successful. The screen updates exactly as I'd like for the first 30 frames, then it stops updating the uniform, even though the call to glUniform1f() and an NSLog sit right next to each other and the NSLog always fires!
The strange part is that if I hold down space (or any other key, for that matter) the uniform is updated as it should be.
Clearly I am missing something about how OS X, OpenGL, or something else handles uniforms.
An explanation of what might be ailing me would be appreciated but a pointer to where I can find information about this will suffice.
Update: After fiddling with glGetError() and glGetUniform*() I've noticed that the program works as intended when left alone. However, when I use the trackpad for input the uniform is reset to 0.000 while the rest of the program shows no errors.
First, have you tried calling glGetError() right after glUniform1f() to see what comes out?
I have very limited knowledge of Mac OS programming (I did some iOS programming two years ago and have forgotten most of it since), so the following is a wild guess.
Are you sure drawRect:(NSRect)dirtyRect is called on the same thread that owns the OpenGL context? As far as I know, OpenGL is not thread-safe, so your glUniform1f() calls might be coming from a different thread and therefore have no effect.
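A quick way to test both guesses at once is something like this inside drawRect: (just diagnostics, not a fix; shaderProgram, timeUniform and value stand in for your own variables):

// Check thread affinity and make sure the context is current
// before touching the uniform.
NSAssert([NSThread isMainThread], @"drawRect: called off the main thread");
[[self openGLContext] makeCurrentContext];

glUseProgram(shaderProgram);
glUniform1f(timeUniform, value);

GLenum err = glGetError();
if (err != GL_NO_ERROR) {
    NSLog(@"glUniform1f error: 0x%x", err); // e.g. GL_INVALID_OPERATION
}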
I've built a small Cocoa app that uses OpenGL to draw some light content. I've used CAOpenGLLayer. While the app itself is very small and works blazingly fast, the launch time of the app is not satisfactory at all. At random times the app would stall for about a second or two upon launch, before showing the content.
I've narrowed down the problem and found that the bottleneck is the CGLChoosePixelFormat() function that is called during OpenGL initialization. It literally takes ~1 second to execute.
For a clean experiment, I created a blank Cocoa app and added an NSOpenGLView to the window. Immediately, the app launch time has grown by 1-2 seconds, for the same reason.
Is there a way to fight this problem? There seems to be no way to avoid calling CGLChoosePixelFormat(); it's essential for getting OpenGL to work on a Mac.
Also, Core Animation is said to be built on OpenGL under the hood, yet my Core Animation apps do not exhibit this slow startup at all. I also tried a symbolic breakpoint on CGLChoosePixelFormat in a Core Animation app, and it never triggers. So Core Animation either doesn't use OpenGL or initializes it in some other way. Does anybody have a solution?
P.S. I know Metal is now the way to go for 3D graphics on Macs, but I need to do this particular project on OpenGL for backward compatibility reasons.
I've run into exactly the same problem as you. So far I've found that it only occurs on the 16-inch 2019 MacBook Pro.
Since the problem is hard to reproduce, I can do nothing more than guess. For now I have switched from NSOpenGLPixelFormat's initWithAttributes: to [NSOpenGLView defaultPixelFormat], and I hope that will work.
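In code, the change amounts to this (untested; frame is whatever rect you were already using):

// Before: a custom format, which goes through the slow
// CGLChoosePixelFormat() path on this machine:
// NSOpenGLPixelFormat *format =
//     [[NSOpenGLPixelFormat alloc] initWithAttributes:attributes];

// After: let AppKit hand back its default format instead.
NSOpenGLPixelFormat *format = [NSOpenGLView defaultPixelFormat];
NSOpenGLView *glView = [[NSOpenGLView alloc] initWithFrame:frame
                                               pixelFormat:format];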
I have two issues in my Metal app:
(1) My call to currentRenderPassDescriptor is stalling. I have too many drawables, apparently.
(2) I'm wholly confused about how to most performantly configure the multiple MTKViews I am using.
Issue (1)
I have a problem with currentRenderPassDescriptor in my app. It occasionally blocks (for 1.00 s), which, according to the docs, happens because no currentDrawable is available.
Background: I have 4 HD 1920x1080 videos playing concurrently, tiled out onto a 3840x2160 second external display as a debugging configuration. The pixel buffers of these AVPlayer instances are captured by 4 independent CVDisplayLink callbacks, and from within each callback there is a draw call to its assigned MTKView. A total of 4 MTKViews are subviews tiled on a single NSWindow, and they are configured for manual drawing.
I'm driving the draws from the CVDisplayLink callbacks manually; if I don't, I get stutter when mousing up on the app's menus, for example.
Within each draw call, I do a bit of kernel-shader work, then attempt to obtain the currentRenderPassDescriptor. If successful, I do one pass of a vertex/fragment shader and then present the drawable. My code flow follows Apple's sample code as well as published examples.
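Stripped down, each draw call looks like this (simplified; commandQueue stands in for the per-view queue in my real code):

- (void)drawToView:(MTKView *)view
{
    @autoreleasepool {
        id<MTLCommandBuffer> commandBuffer = [commandQueue commandBuffer];

        // ... kernel shader work on the incoming pixel buffer ...

        // This is the call that occasionally blocks for ~1 s.
        MTLRenderPassDescriptor *passDescriptor = view.currentRenderPassDescriptor;
        if (passDescriptor != nil) {
            id<MTLRenderCommandEncoder> encoder =
                [commandBuffer renderCommandEncoderWithDescriptor:passDescriptor];
            // ... one vertex/fragment pass ...
            [encoder endEncoding];
            [commandBuffer presentDrawable:view.currentDrawable];
        }
        [commandBuffer commit];
    }
}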
According to Metal System Trace, most draw calls take under 5 ms. The GPU is about 20-25% utilized and about 25% of the GPU memory is free. I can also make the main thread usleep() for 1 second without any hiccups.
Without any user interaction, there's about a 5% chance of the videos stalling out within the first minute. If there's some UI work going on, I see it as WindowServer work in Instruments. I also note that AVFoundation seems to cache about 15 frames of video on the GPU for each AVPlayer.
If the cadence of the draw calls is upset, there's about a 10% chance that things stall, completely or in part: some videos stall completely, some stall down to 1 Hz updates, and some don't stall at all. There's also less chance of stalling while Metal System Trace is running. The movies that have stalled seem to have done so while obtaining a currentRenderPassDescriptor.
Having currentRenderPassDescriptor block for ≈1 s in the middle of a render loop seems like really poor design, so much so that I'm thinking of eschewing MTKView altogether and just drawing to a CAMetalLayer myself. But the docs on CAMetalLayer seem to indicate the same blocking behaviour will occur.
I also grab these 4 pixel buffers on the fly and render sub-size regions of interest to 4 smaller MTKViews on the main monitor; the stutters still occur even if this code is removed.
Is the drawable buffer limit per MTKView or per the backing CALayer? The docs for maximumDrawableCount on CAMetalLayer say the number needs to be 2 or 3. This question ties into the configuration of the views.
Issue (2)
My current setup is a 3840x2160 NSWindow with a single content view. This NSView subclass hides and reveals the mouse cursor via a tracking rect (NSTrackingRectTag). The MTKViews are tiled subviews of this content view.
Is this the best configuration? Namely, one NSWindow with tiled MTKViews… or should I do one MTKView per window?
I'm also not sure how best to configure these windows/layers, i.e. by setting (or clearing) wantsLayer, wantsUpdateLayer, and/or canDrawSubviewsIntoLayer. I'm currently just setting wantsLayer to YES on the single content view, as shown below. Any hints on this would be great.
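Concretely, the only layer configuration in place right now is:

// Single content view; the 4 MTKViews are tiled subviews of it.
self.window.contentView.wantsLayer = YES;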
Does adjusting these properties collapse all the available drawables into the backing layer only, or are there still 2 or 3 per MTKView?
NB: I've attached a sample run of my Metal app. The longest 'work' on the top graph is just under 5ms. The clumps of green/blue are rendering on the 4 MTKViews. The 'work' alternates a bit because one of the videos is a 60fps source; the others are all 30fps.
I am currently attempting to create an interactive, informative poster about anti-aliasing techniques and their effects. The application is written in Obj-C in Xcode and makes use of OpenGL and Cocoa functionality.
I am attempting to create a small animation to illustrate the difficulty of drawing a diagonal line on a pixel grid, but am having real trouble getting my head around the animation aspect.
I am aiming for something with a similar look and feel to this:
I have currently drawn a grid using OpenGL primitives, and would like the effect above to be replicated within my grid, though without the shading yet (that is the next part): just plain black pixels coloured in step by step along the line.
I am new to both OpenGL and Obj-C, so am unsure whether it is best to implement the animation in OpenGL or with OS X Core Animation, neither of which I have used before.
The OpenGL drawing takes place in my MyOpenGLView class, with the drawing done in a drawAnObject method that is called from drawRect:.
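To make the question concrete, the structure I have in mind is a timer advancing a step counter and redrawing, with drawAnObject filling in one grid cell per step (only a sketch; cellForLineStep: and fillGridCellAtX:y: are made-up helpers):

// Advance the animation by one pixel cell per tick.
- (void)animationTick:(NSTimer *)timer
{
    currentStep = (currentStep + 1) % totalSteps;
    [self setNeedsDisplay:YES];
}

- (void)drawAnObject
{
    // Draw the grid as before, then colour one cell per
    // rasterized step of the diagonal line.
    for (int i = 0; i <= currentStep; i++) {
        NSPoint cell = [self cellForLineStep:i];  // hypothetical helper
        [self fillGridCellAtX:cell.x y:cell.y];   // hypothetical helper
    }
}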
Any help would be much appreciated, thanks in advance!
I've been working on implementing a real-time Core Plot graph in my application on OS X. To my dismay I noticed a fairly significant issue: once the line reaches the end of the x-axis and the plot starts scrolling to keep up with it, the CPU load hits 30-35% non-stop.
I figured that before proceeding any further I had better go back and check whether I had made some mistake in my code that caused the CPU to spike like that. I didn't notice anything out of the ordinary, and adjusting the frame rate and update frequency didn't help. I then went back to the real-time example project included with Core Plot, and it has the same effect on the CPU.
Is there anything I can do about this, or is that just the nature of real-time graphing on OS X?
Everything is fine for the first 50 frames (indicated by the line with arrows), but once it reaches the end, things take a turn for the worse.
Side Note:
I noticed Swift does graphing in the playground, and even though it's apparently not real-time (and I'm using Obj-C) it looks really sharp. Is the Swift graphing feature only available within playgrounds, or is there a way to use it in a project? I only mention this because I need to find something efficient soon.
That's the expected behavior with Core Plot. Once the graph starts to scroll, it has to redraw the plot, both axes, and all of the grid lines for each animation frame. You could reduce the drawing load by decreasing the number of grid lines and/or axis tick marks.
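For example, something along these lines on the axes should cut the per-frame drawing work (assuming a CPTXYAxisSet; exact property types vary between Core Plot versions):

// Fewer major grid lines, and no minor grid lines at all.
CPTXYAxisSet *axisSet = (CPTXYAxisSet *)graph.axisSet;
axisSet.xAxis.majorIntervalLength = @10.0;  // wider spacing = fewer lines
axisSet.xAxis.minorTicksPerInterval = 0;
axisSet.xAxis.minorGridLineStyle = nil;
axisSet.yAxis.minorGridLineStyle = nil;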
The playground graphs are a private part of the playground environment.
I'm a little confused about this point.
Everything I've found in books, blogs, forums and even in the OpenGL specs talks only about very abstract techniques. Nothing about real-world examples.
And I'm going crazy with this: how do you put and manage multiple objects (meshes) with OpenGL ES 2.x?
In theory it seems simple. You have a vertex shader (VSH) and a fragment shader (FSH), and you bind both to one program (glCreateProgram, glAttachShader, glLinkProgram, glUseProgram, ...). In every render cycle, that program runs its VSH for each vertex and then runs the FSH on every "pixel" of that 3D object, finally sending the result to the buffer (leaving aside the per-vertex stages, rasterization and the other steps of the pipeline).
OK, seems simple...
All of this is triggered by a call to a draw function (glDrawArrays or glDrawElements).
OK again.
Now comes the part that confuses me.
And what if you have several objects to render?
Let's take a real-world example.
Imagine that you have a landscape with trees, and a character.
The grass of the landscape has one texture, the trees have a texture for the trunk and the leaves (a texture atlas), and the character has another texture (also a texture atlas) and is animated as well.
With this scene in mind, my question is simple:
How do you organize this?
Do you create a separate program (with one VSH and one FSH) for each element in the scene? Like one program for the grass and the soil's relief, one for the trees, and one for the character?
I've tried that, but... when I create multiple programs and try to use glVertexAttribPointer(), the textures and colors of the objects conflict with each other, because the attribute locations (the indices) of the first program are repeated in the second program.
Let me explain: I used glGetAttribLocation() in the class that controls the floor of the scene, and OpenGL returned the indices 0, 1 and 2 for the vertex attributes.
Then, in the tree class, I created another program with other shaders, called glGetAttribLocation() again, and this time OpenGL returned the indices 0, 1, 2 and 3.
In the render cycle, I started by activating the first program with glUseProgram(), set its vertex attributes with glVertexAttribPointer(), and finally called glDrawElements(). After this, I called glUseProgram() for the second program, used glVertexAttribPointer() again, and finally glDrawElements().
But at this point things conflict, because the vertex attribute indices of the second program affect the vertices of the first program too.
I've tried a lot of things, searched a lot, asked a lot... I'm exhausted. I can't find what's wrong.
So I've started to think that I'm doing everything wrong!
Now I repeat my question: how do you work with multiple meshes (with different textures and behavior) in OpenGL ES 2.x? Using multiple programs? How?
To draw multiple meshes, just call glDrawElements/glDrawArrays multiple times. If those meshes require different shaders, just set them; ONE, and only one, shader program is active at a time.
So each time you change your shader program (specifically the VS), you need to reset all vertex attributes and pointers.
As simple as that.
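A bare-bones frame with two programs looks like this (a sketch; the buffer names, attribute locations and interleaved layout are invented for illustration):

// First program: the ground.
glUseProgram(groundProgram);
glBindBuffer(GL_ARRAY_BUFFER, groundVBO);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, groundIBO);
// Re-specify EVERY attribute this program uses. Attribute state is
// global, not per-program, so nothing carries over safely.
GLsizei stride = 5 * sizeof(GLfloat);  // x, y, z, u, v interleaved
glVertexAttribPointer(groundPosLoc, 3, GL_FLOAT, GL_FALSE, stride, (const GLvoid *)0);
glVertexAttribPointer(groundUVLoc, 2, GL_FLOAT, GL_FALSE, stride, (const GLvoid *)(3 * sizeof(GLfloat)));
glEnableVertexAttribArray(groundPosLoc);
glEnableVertexAttribArray(groundUVLoc);
glDrawElements(GL_TRIANGLES, groundIndexCount, GL_UNSIGNED_SHORT, 0);

// Second program: the trees. Same drill, from scratch.
glUseProgram(treeProgram);
glBindBuffer(GL_ARRAY_BUFFER, treeVBO);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, treeIBO);
glVertexAttribPointer(treePosLoc, 3, GL_FLOAT, GL_FALSE, stride, (const GLvoid *)0);
glVertexAttribPointer(treeUVLoc, 2, GL_FLOAT, GL_FALSE, stride, (const GLvoid *)(3 * sizeof(GLfloat)));
glEnableVertexAttribArray(treePosLoc);
glEnableVertexAttribArray(treeUVLoc);
glDrawElements(GL_TRIANGLES, treeIndexCount, GL_UNSIGNED_SHORT, 0);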
Thanks for the answer,
but I think you just repeated my own words... about the draw methods, about one program being active, about everything.
Whatever.
The point is that your words gave me an insight!
You said: "you need to reset all vertex attributes and pointers".
Well... not exactly reset; the problem was that I was not updating ALL the vertex attributes on every render cycle, such as the texture coordinates. I was updating only the attributes whose values had changed, and when I cleared the buffers, I lost the old values.
Now that I update ALL the attributes, whether their values have changed or not, everything works!
See, what I had before was:
glCreateProgram();            // create the program object
...
glAttachShader();             // attach the compiled vertex shader
glAttachShader();             // attach the compiled fragment shader
...
glLinkProgram();
glUseProgram();               // make this program active
...
glGetAttribLocation();        // query the attribute's index
glVertexAttribPointer();      // point the attribute at the vertex data
glEnableVertexAttribArray();
...
glDrawElements();             // issue the draw
I repeated the process for the second program, but called glVertexAttribPointer() only a few times.
Now, what I have is a call to glVertexAttribPointer() for ALL the attributes.
What drove me crazy is that if I removed the first block of code (for the first program), the whole second program worked fine, and if I removed the second block (for the second program), the first one worked fine.
Now it seems so obvious.
Of course: since the VSH runs per vertex, it will work with nulled values if I don't update ALL the attributes and uniforms.
I used to think of OpenGL more like a 3D engine that works with 3D objects and has a scene where you place your objects and set up lights. But no... OpenGL only knows about triangles, lines and points, nothing more. I think differently now.
Anyway, the point is that now I can move forward!
Thanks