Disable mipmapping in OpenGL ES 2.0

I would like to draw several of the same figures (with the same texture) on screen (OpenGL ES 2.0). These figures will use different magnification and minification filters, and different mipmapping states.
The issue is: once I use mipmapping to draw any figure (i.e. once I have called the glGenerateMipmap() function), I can't switch mipmapping off.
Is it possible to switch mipmapping off after glGenerateMipmap() has been called at least once?

glGenerateMipmap only generates the smaller mipmap images (based on the top-level image). But those mipmaps are not used for filtering unless you set a proper mipmapping filter mode (through glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_..._MIPMAP_...)). So if you don't want your texture mipmap-filtered, just disable it for this particular texture by setting either GL_NEAREST or GL_LINEAR as the minification filter. Likewise, not calling glGenerateMipmap does not mean that no mipmapping is going on: a mipmapping filter mode (which is also the default for a newly created texture) will still be used, just that the mipmap images contain rubbish (or the texture is actually incomplete, resulting in implementation-defined behaviour, but usually a black texture).
Likewise you shouldn't call glGenerateMipmap each frame before rendering. Call it once after setting the base image of the texture. As said, it generates the mipmap images, and those won't go away after they've been generated. What decides whether mipmapping is actually used is the texture object's filter mode.
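To illustrate, here is a minimal sketch in plain OpenGL ES 2.0 calls, assuming texture already has its base image uploaded:
glBindTexture(GL_TEXTURE_2D, texture);
glGenerateMipmap(GL_TEXTURE_2D);                 // done once, after uploading the base image

// Draw with mipmapping:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
// ... draw the mipmapped figure ...

// Draw the same texture without mipmapping, simply by switching the minification filter:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
// ... draw the non-mipmapped figure ...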

Related

How can I overlay my UI render target onto the back buffer using DirectX 11?

I have two render targets, the back buffer and a UI render target where all 2d UI will be drawn.
I have used the graphics debugger to confirm that both render targets are being written to with the correct data, but I'm having trouble combining the two right at the end.
Question:
My world objects are drawn directly to the backbuffer so there is no problem displaying these, but how do I now overlay the UI render target OVER the backbuffer?
Desired effect: the UI render target overlaid on the back buffer render target (screenshots of the two render targets omitted).
There are several ways to do this. The easiest is to render your UI elements to a texture that has both a RenderTargetView and a ShaderResourceView, then render the whole texture to the back buffer as a single quad in orthographic projection space. This effectively draws a 2D square containing your UI in screen space on the back buffer. It also has the benefit of allowing transparency.
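For example, the blend-state setup for compositing that UI quad with alpha blending might look roughly like this. This is only a sketch: device, context, backBufferRTV and uiSRV stand in for your own device, immediate context, back-buffer RenderTargetView and the UI texture's ShaderResourceView, and the quad's shaders and vertex buffer are omitted.
D3D11_BLEND_DESC bd = {};
bd.RenderTarget[0].BlendEnable           = TRUE;
bd.RenderTarget[0].SrcBlend              = D3D11_BLEND_SRC_ALPHA;
bd.RenderTarget[0].DestBlend             = D3D11_BLEND_INV_SRC_ALPHA;
bd.RenderTarget[0].BlendOp               = D3D11_BLEND_OP_ADD;
bd.RenderTarget[0].SrcBlendAlpha         = D3D11_BLEND_ONE;
bd.RenderTarget[0].DestBlendAlpha        = D3D11_BLEND_INV_SRC_ALPHA;
bd.RenderTarget[0].BlendOpAlpha          = D3D11_BLEND_OP_ADD;
bd.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;

ID3D11BlendState* uiBlend = nullptr;
device->CreateBlendState(&bd, &uiBlend);

// After the world has been rendered to the back buffer:
context->OMSetRenderTargets(1, &backBufferRTV, nullptr);
context->OMSetBlendState(uiBlend, nullptr, 0xffffffff);
context->PSSetShaderResources(0, 1, &uiSRV);   // the UI texture seen as a shader resource
// ... draw the full-screen quad with your quad shaders here ...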
You could also use the OutputMerger stage to blend the UI render target with the back buffer during rendering of the world geometry. You'd need to be careful how you set up your blend operations, as it could result in items being drawn over the UI, or blending inappropriately.
If your UI is not transparent, you could do the UI rendering first and mark the area under the UI in the stencil buffer, then do your world rendering while the stencil test is enabled. This would cause the GPU to ignore any pixels underneath the UI, and not send them to the pixel shader.
The above could also be modified to write the minimum depth value to the pixels within the UI render target, ensuring all geometry underneath it would fail the depth test. This modification would free up the stencil buffer for mirrors/shadows/etc.
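A hedged sketch of the stencil variant from the last two points, again with device and context standing in for your own objects: mark the UI area with a stencil reference value while drawing the opaque UI, then reject world pixels that carry that value.
D3D11_DEPTH_STENCIL_DESC mark = {};
mark.DepthEnable                  = FALSE;                       // the flat UI needs no depth test
mark.DepthWriteMask               = D3D11_DEPTH_WRITE_MASK_ZERO;
mark.DepthFunc                    = D3D11_COMPARISON_ALWAYS;
mark.StencilEnable                = TRUE;
mark.StencilReadMask              = 0xFF;
mark.StencilWriteMask             = 0xFF;
mark.FrontFace.StencilFailOp      = D3D11_STENCIL_OP_KEEP;
mark.FrontFace.StencilDepthFailOp = D3D11_STENCIL_OP_KEEP;
mark.FrontFace.StencilPassOp      = D3D11_STENCIL_OP_REPLACE;    // stamp the reference value under the UI
mark.FrontFace.StencilFunc        = D3D11_COMPARISON_ALWAYS;
mark.BackFace                     = mark.FrontFace;

D3D11_DEPTH_STENCIL_DESC test = mark;
test.DepthEnable                  = TRUE;                        // normal depth testing for the world
test.DepthWriteMask               = D3D11_DEPTH_WRITE_MASK_ALL;
test.DepthFunc                    = D3D11_COMPARISON_LESS;
test.StencilWriteMask             = 0;
test.FrontFace.StencilPassOp      = D3D11_STENCIL_OP_KEEP;
test.FrontFace.StencilFunc        = D3D11_COMPARISON_NOT_EQUAL;  // reject pixels already marked as UI
test.BackFace                     = test.FrontFace;

ID3D11DepthStencilState *markUI = nullptr, *skipUI = nullptr;
device->CreateDepthStencilState(&mark, &markUI);
device->CreateDepthStencilState(&test, &skipUI);

context->OMSetDepthStencilState(markUI, 1);   // draw the opaque UI first, writing stencil = 1
// ... draw UI ...
context->OMSetDepthStencilState(skipUI, 1);   // then draw the world; pixels with stencil == 1 are skipped
// ... draw world ...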
The above all work for flat UIs drawn over the existing 3D world. To actually draw more complex UIs that appear to be a part of the world, you'll need to actually render the elements to 3D objects in the world space, or do complex projection operations to make it seem like they are.

Guarantee no anti-aliasing rendering to integer texture

For a WebGL 2 canvas, I need a simple 'picking' system, i.e. given a point p in 2D, the system can tell which object (if any) was rendered to p. (I don't need the pick results in the CPU, only in a shader.)
To implement this, each object will be rendered with a different 'color id' to a framebuffer dedicated to picking. I am thinking of using an R16UI or R32UI texture format, and GL_NEAREST filtering. My concern is anti-aliasing: how do I guarantee that the edges of the objects won't get anti-aliased, thus changing the output values, and corrupting the pick system?
I am looking for both the code to disable anti-aliasing, and explanations on why this is/isn't guaranteed, from those who know the standards.
WebGL (and OpenGL ES) don't antialias framebuffers in any automatic way. Antialiasing of framebuffers is a manual operation. In WebGL1 you can't antialias a framebuffer at all. In WebGL2 you'd create a multisample renderbuffer. So basically if you don't create a multisample render buffer you'll get no antialiasing.
Also, integer and unsigned-integer textures are not filterable, which means they only support gl.NEAREST.
So there's nothing to show you. If you use an R16UI or R32UI texture and render to it, it will just work as you were hoping.
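For reference, the setup would look something like this in OpenGL ES 3.0-style calls (WebGL 2 exposes the same functions under the gl. prefix); width and height are assumed to be your canvas size:
GLuint pickTex = 0, pickFbo = 0;
glGenTextures(1, &pickTex);
glBindTexture(GL_TEXTURE_2D, pickTex);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_R32UI, width, height);        // single-sampled, so never antialiased
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); // integer textures are not filterable
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

glGenFramebuffers(1, &pickFbo);
glBindFramebuffer(GL_FRAMEBUFFER, pickFbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, pickTex, 0);
// Render each object with a fragment shader whose output is a uint set to that object's id.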

Can animated GIFs do palette shifting?

Old-school computer graphics sometimes produced animations (cycles and fades) without actually redrawing anything to video memory, purely by updating the color palette.
Is it possible to do this in an animated gif? That is, optimise (reduce file-size of) the gif by only providing a single frame of (significant) raster content, but have each (delayed) animation frame update colour values in the (global) palette?
The short answer is no.
According to the existing standard, every GIF frame containing a local palette must have its own data to be displayed using that palette, otherwise the local palette is of no use.
One of the possible solutions is to define your own GIF Application Extension block (like Netscape did; see the link) to store additional palettes and their time delays. Apparently, those extension blocks should appear after frames whose data they affect.
The downside of this approach is that no one except your decoder would support palette cycling unless your block type somehow makes its way to become a new de-facto standard.
Nevertheless, your handcrafted GIFs would remain valid for all other GIF decoders (even though without any palette cycling), as the standard requires them to silently ignore any GIF Application Extensions with IDs unknown to them.

Cocos3D - background shown through meshes

I imported the .pod file created from Blender and the blue background is shown through the eyelash and eyebrow meshes. Does anyone know why I'm encountering this?
WITHOUT additional material (looking normal except the root of the hair).
WITH new green material added to her left shoulder, the eyebrow and eyelash began showing the background
This issue is caused by the order in which the nodes are being rendered in your scene.
In the first model, the hair is drawn first, then the skin, then the eyebrows and eyelashes. In the second model, the hair, eyebrows and eyelashes are all drawn before the skin. By the time the skin under the hair or eyelashes is drawn, the depth buffer indicates that something closer to the camera has already been drawn, and the engine doesn't bother rendering those skin pixels. But because the eyelashes, eyebrows and hair all contain transparency, we end up looking right through them onto the backdrop.
This use of a depth buffer is key to all 3D rendering. It's how the engine knows not to render pixels that are visually occluded by another object; otherwise all we'd ever see would be the last object rendered. However, when rendering overlapping objects that contain transparency, it's important to get the rendering order correct, so that more distant objects that are behind closer transparent objects are rendered first.
In Cocos3D, there are several tools available for you to order your transparent objects for rendering:
The first, and primary tool, is the drawingSequencer that is managed by the CC3Scene. You can configure several different types of drawing sequencers. The default sequencer is smart enough to render all opaque objects first, then to render the objects that contain transparency in decreasing order of distance from the camera (rendering farther objects first). This works best for most scenes, and in particular where objects are moving around and can move in front of each other unpredictably. Unfortunately, in your custom CC3Scene initialization code (which you sent me per the question comments), you replaced the default drawing sequencer with one that does not sequence transparent objects based on distance. If you remove that change, everything works properly.
Objects that are not explicitly sequenced by distance (as in part 1 above) are rendered in the order in which they are added to the scene. You can therefore also define rendering order by ensuring that the objects are added to your scene in the order in which you want them rendered. This can work well for static models, such as your first character (if you change it to add the hair after the skin).
CC3Node also has a zOrder property, which allows you to override the rendering order explicitly, so that objects with larger zOrder value are rendered before those with smaller zOrder values. This is useful when you have a static model whose components cannot be added in rendering order, or to temporarily override the rendering order of two transparent objects that might be passing in front of each other. Using the zOrder property does depend on using a drawingSequencer that makes use of it (the default drawing sequencer does).
Finally, you can temporarily turn off depth testing or masking when rendering particular nodes, by setting the shouldDisableDepthTest and shouldDisableDepthMask properties to YES on those nodes.

How do I rotate an OpenGL view relative to the center of the view as opposed to the center of the object being displayed?

I'm working on a fork of Pleasant3D.
When rotating an object being displayed, the object always rotates around the same point relative to itself, even if that point is not at the center of the view (e.g. because the user has panned to move the object in the view).
I would like to change this so that the view always rotates the object around the point at the center of the view as it appears to the user instead of the center of the object.
Here is the core of the current code that rotates the object around its center (slightly simplified) (from here):
glLoadIdentity();
// midPlatform is the offset to reach the "middle" of the object (or more specifically
// the platform on which the object sits) in the x/y dimension.
// This is the point around which the view is currently rotated.
Vector3 *midPlatform = [self.currentMachine calcMidBuildPlatform];
glTranslatef((GLfloat)cameraTranslateX - midPlatform.x,
             (GLfloat)cameraTranslateY - midPlatform.y,
             (GLfloat)cameraOffset);
// trackBallRotation and worldRotation come from trackball.h/c which appears to be
// from an Apple OpenGL sample.
if (trackBallRotation[0] != 0.0f) {
    glRotatef(trackBallRotation[0], trackBallRotation[1], trackBallRotation[2], trackBallRotation[3]);
}
// accumulated world rotation via trackball
glRotatef(worldRotation[0], worldRotation[1], worldRotation[2], worldRotation[3]);
glTranslatef(midPlatform.x, midPlatform.y, 0.);
// Now draw object...
What transformations do I need to apply in what order to get the effect I desire?
Some of what I've tried so far
As I understand it this is what the current code does:
"OpenGL performs matrices multiplications in reverse order if multiple transforms are applied to a vertex" (from here). This means that the first transformation to be applied is actually the last one in the code above. It moves the center of the view (0,0) to the center of the object.
This point is then used as the center of rotation for the next two transformations (the rotations).
Finally the midPlatform translation is done in reverse to move the center back to the original location, and the XY translations (panning) done by the user are applied. Here also the "camera" is moved away from the object to the proper location (indicated by cameraOffset).
This seems straightforward enough. So what I need to change is instead of translating the center of the view to the center of the object (midPlatform) I need to translate it to the current center of the view as seen by the user, right?
Unfortunately this is where the transformations start affecting each other in interesting ways and I am running into trouble.
I tried changing the code to this:
glLoadIdentity();
glTranslatef(0,
             0,
             (GLfloat)cameraOffset);
if (trackBallRotation[0] != 0.0f) {
    glRotatef(trackBallRotation[0], trackBallRotation[1], trackBallRotation[2], trackBallRotation[3]);
}
// accumulated world rotation via trackball
glRotatef(worldRotation[0], worldRotation[1], worldRotation[2], worldRotation[3]);
glTranslatef(cameraTranslateX, cameraTranslateY, 0.);
In other words, I translate the center of the view to the previous center, rotate around that, and then apply the camera offset to move the camera away to the proper position. This makes the rotation behave exactly the way I want it to, but it introduces a new issue. Now any panning done by the user is relative to the object. For example if the object is rotated so that the camera is looking along the X axis end-on, if the user pans left to right the object appears to be moving closer/further from the user instead of left or right.
I think I can understand why this is (the XY camera translations being applied before the rotation), and I think what I need to do is figure out a way to cancel out that pre-rotation translation after the rotation (to avoid the weird panning effect), and then do another translation that is relative to the viewer (eye coordinate space) instead of the object (object coordinate space). But I'm not sure exactly how to do this.
I found what I think are some clues in the OpenGL FAQ (http://www.opengl.org/resources/faq/technical/transformations.htm), for example:
9.070 How do I transform my objects around a fixed coordinate system rather than the object's local coordinate system?
If you rotate an object around its Y-axis, you'll find that the X- and Z-axes rotate with the object. A subsequent rotation around one of these axes rotates around the newly transformed axis and not the original axis. It's often desirable to perform transformations in a fixed coordinate system rather than the object’s local coordinate system.
The root cause of the problem is that OpenGL matrix operations postmultiply onto the matrix stack, thus causing transformations to occur in object space. To affect screen space transformations, you need to premultiply. OpenGL doesn't provide a mode switch for the order of matrix multiplication, so you need to premultiply by hand. An application might implement this by retrieving the current matrix after each frame. The application multiplies new transformations for the next frame on top of an identity matrix and multiplies the accumulated current transformations (from the last frame) onto those transformations using glMultMatrix().
You need to be aware that retrieving the ModelView matrix once per frame might have a detrimental impact on your application’s performance. However, you need to benchmark this operation, because the performance will vary from one implementation to the next.
And
9.120 How do I find the coordinates of a vertex transformed only by the ModelView matrix?
It's often useful to obtain the eye coordinate space value of a vertex (i.e., the object space vertex transformed by the ModelView matrix). You can obtain this by retrieving the current ModelView matrix and performing simple vector / matrix multiplication.
But I'm not sure how to apply these in my situation.
You need to transform/translate the "center of view" point to the origin, rotate, then invert that translation, going back to the object's transform. This is known as a basis change in linear algebra.
This is much easier to work with if you have a proper 3d-math library (I'm assuming you do have one), and it also helps you stay away from the deprecated fixed-pipeline APIs (more on that later).
Here's how I'd do it:
Find the transform for the center-of-view point in world coordinates (figure it out, then draw it to make sure it's correct, with the x, y, z axes too, since the axes are supposed to be correct w.r.t. the view). If you use the center-of-view point and the rotation (usually the inverse of the camera's rotation), this will be a transform from the world origin to the view center. Store this in a 4x4 matrix transform.
Apply the inverse of the above transform, so that the view-center point becomes the origin: glMultMatrixf(center_of_view_tf.inverse());
Rotate about this point however you want (glRotatef()).
Transform everything back to world space (glMultMatrixf(center_of_view_tf);).
Apply the object's own world transform (glTranslate/glRotate or glMultMatrix) and draw it. (A combined sketch of these steps follows below.)
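Putting steps 2 to 5 together on the fixed-function matrix stack, as a hedged sketch only: center_of_view_tf is the transform from step 1, and .inverse() and .data() (returning a column-major float[16]) are whatever your math library provides.
glMultMatrixf(center_of_view_tf.inverse().data());   // step 2: bring the view-center point to the origin
glRotatef(angle, axisX, axisY, axisZ);                // step 3: rotate about that point
glMultMatrixf(center_of_view_tf.data());              // step 4: transform back to world space
// step 5: apply the object's own world transform (glTranslatef / glRotatef / glMultMatrixf) and draw it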
About the fixed function pipeline
Back in the old days, there were separate transistors for transforming a vertex (or its texture coordinates), computing where light was in relation to it and applying lights (up to 8), and texturing fragments in many different ways. Simply put, glEnable() enabled fixed blocks of silicon to do some computation in the hardware graphics pipeline. As performance grew, die sizes shrank and people demanded more features, the amount of dedicated silicon grew too, and much of it went unused.
Eventually, it got so advanced that you could program it in rather obscene ways (register combiners, anyone?). And then it became feasible to actually upload a small assembler program for all vertex-level transforms. At that point it made no sense to keep a lot of silicon around that just did one thing (especially as you could've used those transistors to make the programmable stuff faster), so everything became programmable. If "fixed function" rendering was called for, the driver just converted the state (X lights, texture projections, etc.) to shader code and uploaded that as a vertex shader.
So currently, where even the fragment processing is programmable, there are just a lot of fixed-function options that are used by tons and tons of OpenGL applications, but the silicon on the GPU just runs shaders (and lots of them, in parallel).
...
To make OpenGL more efficient, the drivers less bulky, and the hardware simpler and usable on mobile/console devices, and to take full advantage of the programmable hardware that OpenGL runs on these days, many functions in the API are now marked deprecated. They are not available in OpenGL ES 2.0 and beyond (mobile), and you won't be getting the best performance out of them even on desktop systems (where they will still be in the driver for ages to come, serving equally ancient code bases originating back to the dawn of accelerated 3D graphics).
The fixed-functionness mostly concerns how transforms/lighting/texturing etc. are done by "default" in OpenGL (i.e. glEnable(GL_LIGHTING)), instead of you specifying these ops in your custom shaders.
In the new, programmable OpenGL, transform matrices are just uniforms in the shader. Any rotate/translate/mult/inverse (like the above) should be done by client code (your code) before being uploaded to OpenGL. (Using only glLoadMatrix is one way to start thinking about it, but instead of using gl_ModelViewProjectionMatrix and its ilk in your shader, use your own uniforms.)
It's a bit of a bother, since you have to implement quite a bit of what the GL driver did for you before, but if you have your own object list/graph with transforms stored somewhere, it's not that much work. (OTOH, if you have a lot of glTranslate/glRotate calls scattered through your code, it might be...). As I said, a good 3d-math library is indispensable here.
...
So, to change the above code to "programmable pipeline" style, you'd just do all these matrix multiplications in your own code (instead of the GL driver doing it, still on the CPU) and then send the resulting matrix to OpenGL as a uniform before you activate the shaders and draw your object from VBOs.
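As an illustration only (the names here are made up, not from the original code): compute the combined matrix yourself and hand it to your shader as a uniform before drawing.
// In the vertex shader:  uniform mat4 uModelViewProjection;  gl_Position = uModelViewProjection * vec4(position, 1.0);
Mat4 mvp = projection * view * model;               // Mat4 and these matrices come from your own math library
glUseProgram(program);
GLint loc = glGetUniformLocation(program, "uModelViewProjection");
glUniformMatrix4fv(loc, 1, GL_FALSE, mvp.data());   // upload the matrix as a plain uniform (column-major here)
glBindBuffer(GL_ARRAY_BUFFER, vbo);
// ... set up vertex attribute pointers ...
glDrawArrays(GL_TRIANGLES, 0, vertexCount);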
(Note that modern cards do not have fixed-function code, just a lot of code in the driver to compile fixed-function rendering state to a shader that does the job. No wonder "classic" GL drivers are huge...)
...
Some info about this process is available at Tom's Hardware Guide and probably Google too.