I'm a little confused about this point.
Everything I've found in books, blogs, forums, and even the OpenGL specs talks about very abstract techniques, nothing about real-world examples.
And I'm going crazy over this: how do you place and manage multiple objects (meshes) with OpenGL ES 2.x?
In theory it seems simple. You have a Vertex Shader (VSH) and a Fragment Shader (FSH), and you attach both to one Program (glCreateProgram, glUseProgram, ...). In every render cycle, that Program runs its VSH for each vertex, then runs its FSH for every "pixel" of that 3D object, and finally sends the result to the buffer (leaving aside the per-vertex processing, rasterization, and the other pipeline stages).
OK, seems simple...
All this is fired by a call to the draw function (glDrawArrays or glDrawElements).
OK again.
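To make that mental model concrete, here is roughly what I mean in code (a minimal sketch I put together; the shader sources and the triangle data are invented for illustration, and error checking is omitted):

/* Minimal OpenGL ES 2.0 single-mesh sketch. */
static const char *vshSrc =
    "attribute vec4 a_position;\n"
    "void main() { gl_Position = a_position; }\n";
static const char *fshSrc =
    "precision mediump float;\n"
    "void main() { gl_FragColor = vec4(1.0, 0.5, 0.2, 1.0); }\n";

static GLuint prog;

static GLuint compile(GLenum type, const char *src) {
    GLuint s = glCreateShader(type);
    glShaderSource(s, 1, &src, NULL);
    glCompileShader(s);
    return s;
}

void setup(void) {                       /* done once */
    prog = glCreateProgram();
    glAttachShader(prog, compile(GL_VERTEX_SHADER, vshSrc));
    glAttachShader(prog, compile(GL_FRAGMENT_SHADER, fshSrc));
    glLinkProgram(prog);
}

void draw(void) {                        /* done every render cycle */
    static const GLfloat tri[] = { 0.0f, 0.5f, -0.5f, -0.5f, 0.5f, -0.5f };
    GLint pos = glGetAttribLocation(prog, "a_position");
    glUseProgram(prog);
    glVertexAttribPointer(pos, 2, GL_FLOAT, GL_FALSE, 0, tri);
    glEnableVertexAttribArray(pos);
    glDrawArrays(GL_TRIANGLES, 0, 3);
}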
Now here is where things get confusing for me.
What if you have several objects to render?
Let's talk about a real world example.
Imagine that you have a landscape with trees, and a character.
The grass of the landscape has one texture, the trees have a texture for the trunk and leaves (a texture atlas), and the character has yet another texture (also an atlas) and is animated too.
With this scene in mind, my question is simple:
How do you organize this?
Do you create a separate Program (with one VSH and one FSH) for each element in the scene? Say, one Program for the grass and terrain relief, one for the trees, and one for the character?
I've tried that, but... when I create multiple Programs and try to use glVertexAttribPointer(), the textures and colors of the objects conflict with each other, because the attribute locations (the indices) of the first Program are repeated in the second Program.
Let me explain: I used glGetAttribLocation() in the class that controls the floor of the scene, and OpenGL returned the indices 0, 1, and 2 for the vertex attributes.
Then, in the tree class, I created another Program with its own shaders, called glGetAttribLocation() again, and this time OpenGL returned the indices 0, 1, 2, and 3.
In the render cycle, I set the first Program with glUseProgram(), updated its vertex attributes with glVertexAttribPointer(), and called glDrawElements(). After that, I called glUseProgram() for the second Program, used glVertexAttribPointer() again, and finally called glDrawElements().
But at this point things conflict: the vertex attribute indices of the second Program affect the vertices of the first Program too.
I've tried a lot of things, searched a lot, asked a lot... I'm exhausted. I can't find what is wrong.
So I started to think that I'm doing everything wrong!
So I repeat my question: how do you work with multiple meshes (with different textures and behavior) in OpenGL ES 2.x? Using multiple Programs? How?
To draw multiple meshes, just call glDrawElements/glDrawArrays multiple times. If those meshes require different shaders, just switch to them; ONE, and only ONE, shader program is active at a time.
So each time you change your shader program (specifically the VS), you need to reset all vertex attributes and pointers.
It's as simple as that.
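In code, that render cycle looks roughly like this (a sketch; every handle and location below is assumed to have been created at setup, and each mesh interleaves position + UV as three + two floats, so the stride is 20 bytes):

/* Handles assumed to be created once at setup; names are illustrative. */
GLuint progFloor, progTree;
GLuint floorVBO, floorIBO, treeVBO, treeIBO;
GLint  floorPos, floorUV, treePos, treeUV;   /* from glGetAttribLocation */
GLsizei floorIndexCount, treeIndexCount;

void render_frame(void) {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    /* Mesh 1: the floor. */
    glUseProgram(progFloor);
    glBindBuffer(GL_ARRAY_BUFFER, floorVBO);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, floorIBO);
    /* Re-specify EVERY attribute this program uses, on every draw. */
    glVertexAttribPointer(floorPos, 3, GL_FLOAT, GL_FALSE, 20, (const void *)0);
    glVertexAttribPointer(floorUV,  2, GL_FLOAT, GL_FALSE, 20, (const void *)12);
    glEnableVertexAttribArray(floorPos);
    glEnableVertexAttribArray(floorUV);
    glDrawElements(GL_TRIANGLES, floorIndexCount, GL_UNSIGNED_SHORT, 0);

    /* Mesh 2: a tree. Its attribute indices may well be the same numbers
       (0, 1, ...) as the floor's; that is harmless, because the pointers
       are re-specified after the glUseProgram switch. */
    glUseProgram(progTree);
    glBindBuffer(GL_ARRAY_BUFFER, treeVBO);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, treeIBO);
    glVertexAttribPointer(treePos, 3, GL_FLOAT, GL_FALSE, 20, (const void *)0);
    glVertexAttribPointer(treeUV,  2, GL_FLOAT, GL_FALSE, 20, (const void *)12);
    glEnableVertexAttribArray(treePos);
    glEnableVertexAttribArray(treeUV);
    glDrawElements(GL_TRIANGLES, treeIndexCount, GL_UNSIGNED_SHORT, 0);
}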
Thanks for the answer,
But I think you just repeated my own words... about the draw methods, about one Program being active, about everything.
Whatever.
The point is that your words gave me an insight!
You said: "you need to reset all vertex attributes and pointers".
Well... not exactly reset; the real problem was that I wasn't updating ALL the vertex attributes on every render cycle, like the texture coordinates. I was updating only the attributes whose values changed, and when I cleared the buffers, I lost the older values.
Now that I update ALL the attributes, whether or not their values changed, everything works!
See, what I had before is:
glCreateProgram();
...
glAttachShader();
glAttachShader();
...
glLinkProgram();
glUseProgram();
...
glGetAttribLocation();
glVertexAttribPointer();
glEnableVertexAttribArray();
...
glDrawElements();
I repeated the process for the second Program, but only called glVertexAttribPointer() for a few of the attributes.
Now what I have is a call to glVertexAttribPointer() for ALL the attributes.
What drove me crazy is that if I removed the first block of code (for the first Program), the whole second Program worked fine.
And if I removed the second block (for the second Program), the first one worked fine.
Now seems so obvious.
Of course: since the VSH runs per vertex, it will happily work with nulled values if I don't update ALL the attributes and uniforms.
I used to think of OpenGL as more like a 3D engine that works with 3D objects, has a scene where you place your objects, lets you set lights... But no: OpenGL only knows about triangles, lines, and points, nothing more. I think differently now.
Anyway, the point is that now I can move forward!
Thanks
Related
My background is in OpenGL and I'm attempting to learn Vulkan. I'm having a little trouble setting up a class so I can render multiple objects with different textures, vertex buffers, and UBO values. I've run into an issue where two of my images are drawn, but they flicker and alternate. I'm thinking it must be due to presenting an image after each draw call. Is there a way to delay presentation of an image? Or to merge different images together before presenting? My code can be found here; I'm hoping it is enough for someone to get an idea of what I'm trying to do: https://gitlab.com/cwink/Ingin/blob/master/ingin.cpp
Thanks!
You call render twice per frame, and render calls vkQueuePresentKHR, so naturally your two renderings alternate.
You can delay presentation simply by delaying the vkQueuePresentKHR call. Say you want to show each image for ~1 s: you could simply std::this_thread::sleep_for(std::chrono::seconds(1)); after each render call. (Probably not the best way to do it, but it shows where your problem lies.)
vkQueuePresentKHR does not do any kind of "merging" for you. Typically you "merge" images by simply drawing them into the same swapchain VkImage in the first place, and then presenting it once.
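Structurally, that means recording both objects' draw calls into the same render pass and presenting once at the end of the frame. A rough sketch of such a frame in C (every handle is assumed to have been created during setup; per-object buffer/descriptor bindings, fence synchronization, and error handling are omitted):

/* One frame: draw BOTH objects into the same swapchain image, present once. */
void draw_frame(VkDevice device, VkSwapchainKHR swapchain, VkQueue queue,
                VkCommandBuffer cmd, VkRenderPass renderPass,
                VkFramebuffer *framebuffers, VkExtent2D extent,
                VkPipeline pipeA, VkPipeline pipeB,
                VkSemaphore imageReady, VkSemaphore renderDone)
{
    uint32_t imageIndex;
    vkAcquireNextImageKHR(device, swapchain, UINT64_MAX,
                          imageReady, VK_NULL_HANDLE, &imageIndex);

    VkCommandBufferBeginInfo bi = { VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO };
    vkBeginCommandBuffer(cmd, &bi);

    VkClearValue clear = { .color = { .float32 = { 0, 0, 0, 1 } } };
    VkRenderPassBeginInfo rp = { VK_STRUCTURE_TYPE_RENDER_PASS_BEGIN_INFO };
    rp.renderPass = renderPass;
    rp.framebuffer = framebuffers[imageIndex];
    rp.renderArea.extent = extent;
    rp.clearValueCount = 1;
    rp.pClearValues = &clear;
    vkCmdBeginRenderPass(cmd, &rp, VK_SUBPASS_CONTENTS_INLINE);

    /* Object A and object B go into the SAME render pass... */
    vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, pipeA);
    vkCmdDraw(cmd, 6, 1, 0, 0);   /* bind A's buffers/descriptor sets first */
    vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, pipeB);
    vkCmdDraw(cmd, 6, 1, 0, 0);   /* bind B's buffers/descriptor sets first */

    vkCmdEndRenderPass(cmd);
    vkEndCommandBuffer(cmd);

    VkPipelineStageFlags wait = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT;
    VkSubmitInfo si = { VK_STRUCTURE_TYPE_SUBMIT_INFO };
    si.waitSemaphoreCount = 1;   si.pWaitSemaphores = &imageReady;
    si.pWaitDstStageMask = &wait;
    si.commandBufferCount = 1;   si.pCommandBuffers = &cmd;
    si.signalSemaphoreCount = 1; si.pSignalSemaphores = &renderDone;
    vkQueueSubmit(queue, 1, &si, VK_NULL_HANDLE);

    /* ...and the frame is presented exactly once. */
    VkPresentInfoKHR pi = { VK_STRUCTURE_TYPE_PRESENT_INFO_KHR };
    pi.waitSemaphoreCount = 1; pi.pWaitSemaphores = &renderDone;
    pi.swapchainCount = 1;     pi.pSwapchains = &swapchain;
    pi.pImageIndices = &imageIndex;
    vkQueuePresentKHR(queue, &pi);
}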
In iOS, I'd like to have a series of items in "space", similar to the way Time Machine works. The "space" would be navigated by a scrollbar-like feature on the side of the page. So if the person scrolls up, it would essentially zoom in within the space, and objects that were further away would come closer to the reference point. If one zooms out, those objects would fade into the back, and whatever is behind the frame of reference would come into view. Kind of like this.
I'm open to a variety of solutions. I imagine there's a relatively easy solution within OpenGL, I just don't know where to begin.
Check out Nick Lockwood's iCarousel on github. It's a very good component. The example code he provides uses a custom carousel style very much like what you describe. You should get there with just a few tweaks.
As you said, in OpenGL (ES) it is relatively easy to accomplish what you ask; however, it may not be equally easy to explain it to someone who is not confident with OpenGL :)
First of all, I'd suggest you take a look at The Red Book, the reference guide to OpenGL, or at the OpenGL Wiki.
To begin, you might practice with GLUT; it will help you get comfortable with OpenGL by providing a high-level API that lets you skip the boring side of setting up an OpenGL context and go directly to the drawing part.
OpenGL ES is a subset of OpenGL, so it essentially has the same structure. Once you have understood how to use OpenGL, it shouldn't be so difficult to use OpenGL ES. Of course, Apple's documentation is a very important resource.
Once you know a fair amount about OpenGL, you should be able to easily see how your program should be structured.
You may, for example, keep your viewpoint fixed and translate the world (or vice versa). There is (of course) no universal solution, especially because the only thing that matters is the final result.
Another solution (maybe equally good, depending on your needs) may be to simply scale images (representing the objects of your world) up and down to simulate movement through them.
For example, you could keep all of your images in an array and use a slider to increase/decrease the size of the current image. Once an image becomes too large for the display, you could gradually decrease its alpha so that the image behind slowly appears; see the sketch below. Take a look at the UIImageView reference, it contains all the APIs you need for this.
This may cost you the sense of 3-dimensionality, but it's probably a simpler/faster solution than learning OpenGL.
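The zoom math itself is language-independent. Here is one way to map a slider value to a per-image scale and alpha (a sketch in C; the depth spacing and fade constants are invented):

#include <math.h>

/* Each image sits at an integer "depth" d = 0, 1, 2, ... and the slider
   value z moves the viewer through the stack. An image grows as z
   approaches its depth and fades out once the viewer passes it. */
typedef struct { float scale; float alpha; } Layer;

Layer layer_for_depth(int d, float z) {
    Layer out;
    float dist = (float)d - z;           /* distance from the viewer   */
    out.scale = powf(2.0f, -dist);       /* halves with each step back */
    if (dist < 0.0f)                     /* in front of / past viewer: */
        out.alpha = fmaxf(0.0f, 1.0f + dist);  /* fade out over 1 unit */
    else
        out.alpha = 1.0f;
    return out;
}

Applying the resulting scale and alpha to each UIImageView whenever the slider moves gives the fly-through effect without any OpenGL.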
I'm programming a nice little game that uses shader-generated simplex noise to display random terrain computed on the fly.
I'm using Objective-C and Xcode 4, and I have gotten everything to run nicely using a subclass of NSOpenGLView. The subclass first compiles the shader and then renders a quad with the noise texture. The program has no problem running this at an acceptable speed (60 Hz).
The subclass of NSOpenGLView uses an NSRunLoop to fire a selector which in turn calls drawRect:(NSRect)dirtyRect. This is done every frame.
Now, I want the shader to use a uniform that is updated each frame.
The shader should be able to react to a variable that might change every frame, so I'm trying to update the uniform at that frequency. The uniform update is done in the drawRect:(NSRect)dirtyRect method.
I am partially successful. The screen updates exactly as I'd like for the first 30 frames; then it stops updating the uniform, even though my glUniform1f() call and an NSLog sit right next to each other and the NSLog always fires..!
The strange part is that if I hold space pressed (or any other key, for that matter), the uniform is updated as it should be.
Clearly I am missing something here regarding how OS X or OpenGL or something else handles uniforms.
An explanation of what might be ailing me would be appreciated, but a pointer to where I can find information about this will suffice.
Update: After fiddling with glGetError() and glGetUniform*(), I've noticed that the program works as intended when left alone. However, when I use the trackpad for input, the uniform is reset to 0.000 while the rest of the program shows no errors.
First, have you tried calling glGetError() right after glUniform1f() to see what comes out?
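For example, something like this right after the update (a small helper of my own; the uniform location name is just illustrative):

#include <stdio.h>

/* Drains and logs all pending GL errors; call right after glUniform1f()
   to catch a silent failure such as GL_INVALID_OPERATION (e.g. no
   program bound, or a location taken from a different program). */
void check_gl_error(const char *where) {
    GLenum err;
    while ((err = glGetError()) != GL_NO_ERROR)
        fprintf(stderr, "GL error 0x%04X after %s\n", err, where);
}

/* Usage:
   glUniform1f(timeLocation, t);
   check_gl_error("glUniform1f");
*/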
I have a very limited knowledge of Mac OS programming (I did some iOS programming two years ago and forgot most of it since then) so the following is a wild guess.
Are you sure drawRect:(NSRect)dirtyRect is called by the same thread that owns the OpenGL context? As far as I know, OpenGL is not thread-safe, so it could be that your glUniform1f() calls come from a different thread and are thus unable to do anything.
I have some quads with a transparent texture and some objects behind them. However, the objects behind don't seem to show through. I know it's something to do with GL_BLEND, but I can't manage to make them show.
I've tried with:
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_BLEND);
but it's still not working. What I basically have is:
// I paint the object
draw_ac3d_file([actualObject getCurrentObject3d]);
// I paint the quad
paintQuadWithAlphaTexture();
There are two common scenarios that create this situation, and it is difficult to tell which one applies to your program, if either.
Draw Order
First, make sure you are drawing your objects in the correct order. You must draw from back-to-front or else the models will not be blended properly.
http://www.opengl.org/wiki/Transparency_Sorting
Note: as Arne Bergene Fossaa pointed out, front-to-back is the proper way to render objects that are not transparent, from a performance standpoint. Because of this, most renderers first draw all the models that have no transparency front-to-back, and then go back and render all models that have transparency back-to-front. This is covered in most 3D graphics texts.
[Figures: back-to-front vs. front-to-back draw order; image credit to Geoff Leach at RMIT University]
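Applied to your code, the two-pass pattern would look something like this (a sketch reusing the calls from your snippet; glDepthMask is the important part for the transparent pass):

/* Pass 1: opaque geometry first, ideally front-to-back, depth writes on. */
glDisable(GL_BLEND);
glDepthMask(GL_TRUE);
draw_ac3d_file([actualObject getCurrentObject3d]);

/* Pass 2: transparent geometry last, back-to-front. Depth TEST stays on,
   but depth WRITES go off so transparent quads don't occlude each other. */
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glDepthMask(GL_FALSE);
paintQuadWithAlphaTexture();
glDepthMask(GL_TRUE);   /* restore for the next frame */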
Lighting
The second most common issue is improper use of lighting. Normally in this case, if you were using the fixed-function pipeline, people would advise you to simply call glDisable(GL_LIGHTING);
Now this should work (if lighting is the cause at all), but what if you want lighting? Then you would either have to employ custom shaders or set up proper material settings for the models.
A discussion of using the material properties can be found at http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=285889
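For completeness, a fixed-function material setup might look like this (a sketch with invented values; in fixed-function lighting, the alpha that blending sees comes from the diffuse material color):

GLfloat diffuse[] = { 0.8f, 0.8f, 0.8f, 0.5f };   /* 50% opaque */
GLfloat ambient[] = { 0.2f, 0.2f, 0.2f, 0.5f };
glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);
glMaterialfv(GL_FRONT_AND_BACK, GL_DIFFUSE, diffuse);
glMaterialfv(GL_FRONT_AND_BACK, GL_AMBIENT, ambient);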
I vaguely remember seeing something in OpenGL (not ES, which was still at v1.0 on the iPhone when I came across this, which is why I never used it) that let me specify which edges of my polygons were considered outlines vs those that made up the interior of faces. As such, this isn't the same as the outline of the entire model (which I know how to do), but rather the outline of a planar face with all its tris basically blended into one poly. For instance, in a cube made up of tri's, each face is actually two tris. I want to render the outline of the square, but not the diagonal across the face. Same thing with a hexagon. That takes four tris, but just one outline for the face.
Now yes, I know I can simply test all the edges to see if they share coplanar faces, but I could have sworn I remember seeing, somewhere in the definition of the tri mesh data, a way to say 'this line outlines a face, whereas this one is inside a face.' That way, when rendering, you could set a flag that basically says 'give me a wireframe, but only the wires around the edges of complete faces, not around the tris that make them up.'
BTW, my target is every platform that supports OpenGL ES 2.0, but my dev platform is iOS. Again, I'm pretty sure this was originally in OpenGL and may have been deprecated once shaders came on the scene, but I can't even find a reference to the feature to check whether that's the case.
The only way I know of now is to have one set of vertices but two separate sets of indices: one for rendering tris, and another for rendering the wireframes of the faces. It's a real pain, since I end up hand-coding a lot of this, which, again, I'm 99% sure you used to be able to define when specifying the lines.
GL_QUADS, glEdgeFlag (which is the feature you're remembering) and glPolygonMode are not supported in OpenGL ES.
You could use GL_LINES to draw the wireframe. To get hidden-line removal, first draw black filled triangles (with depth testing on) and then draw the edges you are interested in with GL_LINES.
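The two-index-buffer approach you describe would look roughly like this in ES 2.0 (a sketch; the program, buffers, locations, and counts are placeholders assumed to exist, and glPolygonOffset nudges the fill back so the lines don't z-fight):

/* One vertex buffer, two index buffers: triIBO holds the triangles,
   edgeIBO holds only the face-outline edges (no interior diagonals). */
glUseProgram(prog);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glVertexAttribPointer(posLoc, 3, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(posLoc);

/* Hidden-surface pass: black fill, pushed back in depth so the
   edges win where they overlap. */
glEnable(GL_POLYGON_OFFSET_FILL);
glPolygonOffset(1.0f, 1.0f);
glUniform4f(colorLoc, 0.0f, 0.0f, 0.0f, 1.0f);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, triIBO);
glDrawElements(GL_TRIANGLES, triIndexCount, GL_UNSIGNED_SHORT, 0);
glDisable(GL_POLYGON_OFFSET_FILL);

/* Outline pass: only the edges of whole faces, in white. */
glUniform4f(colorLoc, 1.0f, 1.0f, 1.0f, 1.0f);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, edgeIBO);
glDrawElements(GL_LINES, edgeIndexCount, GL_UNSIGNED_SHORT, 0);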