OpenGL ES 2.0 GL_DEPTH_COMPONENT and glTexImage2D - opengl-es-2.0

Chapter 12 of the OpenGL ES 2.0 Programming Guide contains Example 12-2, Render to Depth Texture.
This example calls the glTexImage2D API with GL_DEPTH_COMPONENT as the internalformat parameter.
But using GL_DEPTH_COMPONENT is not allowed according to https://www.khronos.org/registry/OpenGL-Refpages/es2.0/xhtml/glTexImage2D.xml.
So I have two questions about OpenGL ES 2.0:
If the example is not valid, how do you render to a depth texture? If the example is valid, why doesn't it match the description on www.khronos.org?
Which APIs accept the GL_DEPTH_COMPONENT enumeration?

Which APIs accept the GL_DEPTH_COMPONENT enumeration?
As for the book example: in core OpenGL ES 2.0, GL_DEPTH_COMPONENT is not a legal format/internalformat for glTexImage2D, which is why the reference page does not list it. The example presumably relies on the OES_depth_texture extension, which adds exactly that usage; check for the extension before trying to render to a depth texture. In the core API, glRenderbufferStorage uses GL_DEPTH_COMPONENT, specifically the sized form GL_DEPTH_COMPONENT16:
glGenRenderbuffers(1, (GLuint*)&_nRenderTargetRboDepthId);
glBindRenderbuffer(GL_RENDERBUFFER, _nRenderTargetRboDepthId);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, nTexWidth, nTexHeight);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, _nRenderTargetRboDepthId);
GLenum err = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if (err != GL_FRAMEBUFFER_COMPLETE) {
    // handle the incomplete framebuffer (log the status and bail out)
}

Related

What are the normal methods for achieving texture mapping with ray tracing?

When you create a BLAS (bottom-level acceleration structure) you specify any number of vertex/index buffers to be part of the structure. How does that end up interacting with the shader, and how is it specified in the descriptor set? How should I link these structures with materials?
How is texture mapping usually done with ray tracing? I saw some sort of "materials table" in Q2RTX, but the documentation is non-existent and the code is sparsely commented.
A common approach is to use a material buffer in combination with a texture array that is addressed in the shaders wherever you require the texture data. You then pass the material id, e.g. per-vertex or per-primitive, and use that to dynamically fetch the material, and with it the texture index. Since Vulkan ray tracing already has demanding feature requirements, you can simplify this by using the VK_EXT_descriptor_indexing extension (Spec), which makes it possible to create a single large descriptor set containing all textures required to render your scene.
The relevant shader parts:
// Enable required extension
...
#extension GL_EXT_nonuniform_qualifier : enable

// Material definition
struct Material {
    int albedoTextureIndex;
    int normalTextureIndex;
    ...
};

// Bindings
layout(binding = 6, set = 0) readonly buffer Materials { Material materials[]; };
layout(binding = 7, set = 0) uniform sampler2D[] textures;
...

// Usage
void main()
{
    Primitive primitive = unpackTriangle(gl_PrimitiveID, ...);
    Material material = materials[primitive.materialId];
    vec4 color = texture(textures[nonuniformEXT(material.albedoTextureIndex)], uv);
    ...
}
In your application you then create a buffer that stores the materials generated on the host, and bind it to the binding point of the shader.
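A minimal host-side sketch of building that material buffer. The struct mirrors the shader's Material above; with only int members, std430 packs them tightly, so no padding is needed. The two-member layout is taken from the example; the helper name and anything beyond that are assumptions:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical host-side mirror of the shader's Material struct. Under
// std430, two ints have 4-byte alignment and an 8-byte array stride, so a
// plain C++ struct matches the GPU layout byte-for-byte.
struct Material {
    int32_t albedoTextureIndex;
    int32_t normalTextureIndex;
};

// Flatten the materials into the byte blob that gets uploaded to the
// storage buffer bound at (set = 0, binding = 6).
std::vector<uint8_t> packMaterials(const std::vector<Material>& materials) {
    const uint8_t* begin = reinterpret_cast<const uint8_t*>(materials.data());
    return std::vector<uint8_t>(begin, begin + materials.size() * sizeof(Material));
}
```

The resulting blob is what you would hand to your buffer-upload routine.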
For the textures, you pass them as an array of textures. An array texture would be an option too, but it isn't as flexible, because every slice of an array texture must have the same size. Note that the texture array in the example above is unsized; this is made possible by VK_EXT_descriptor_indexing and is only allowed for the final binding in a descriptor set. This adds some flexibility to your setup.
As for passing the material index that you fetch the data with: the easiest approach is to pass it along with your vertex data, which you'll have to access/unpack in your shaders anyway:
struct Vertex {
    vec4 pos;
    vec4 normal;
    vec2 uv;
    vec4 color;
    int32_t materialIndex;
};
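If this vertex data is consumed as a storage buffer in the hit shaders, the host-side layout has to follow std430 rules. A hedged C++ mirror of the Vertex struct above; the padding fields are assumptions added to reproduce the implicit padding std430 inserts:

```cpp
#include <cstdint>
#include <cstddef>

// Hypothetical host-side mirror of the shader's Vertex struct under std430
// rules (vec4 aligns to 16 bytes, vec2 to 8, int to 4). The explicit padding
// keeps the C++ byte layout identical to what the shader will read.
struct Vertex {
    float   pos[4];         // offset  0
    float   normal[4];      // offset 16
    float   uv[2];          // offset 32
    float   _pad0[2];       // the next vec4 must start on a 16-byte boundary
    float   color[4];       // offset 48
    int32_t materialIndex;  // offset 64
    int32_t _pad1[3];       // round the array stride up to 80 (multiple of 16)
};

static_assert(sizeof(Vertex) == 80, "std430 array stride");
static_assert(offsetof(Vertex, color) == 48, "vec4 alignment");
```

Asserting the offsets at compile time catches layout mismatches before they turn into garbage texture indices on the GPU.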

How to correctly use ImageReader with YUV_420_888 and MediaCodec to encode video to h264 format?

I'm implementing a camera application on Android devices. Currently, I use the Camera2 API and ImageReader to get image data in YUV_420_888 format, but I don't know exactly how to write this data to MediaCodec.
Here are my questions:
What is YUV_420_888?
The format YUV_420_888 is ambiguous, because it can be any format in the YUV420 family, such as YUV420P (planar) or YUV420SP (semi-planar), right?
By accessing the image's three planes (#0, #1, #2), I can get the Y (#0), U (#1), and V (#2) values of the image. But the arrangement of these values may differ between devices. For example, if YUV_420_888 really means YUV420P, the size of both Plane #1 and Plane #2 is a quarter of the size of Plane #0. If it really means YUV420SP, the size of both Plane #1 and Plane #2 is half the size of Plane #0 (each of Plane #1 and Plane #2 then contains interleaved U and V values).
If I want to write the data from the image's three planes to MediaCodec, what format do I need to convert to: YUV420P, NV21, NV12, ...?
What is COLOR_FormatYUV420Flexible?
The format COLOR_FormatYUV420Flexible is also ambiguous, because it can be any format in the YUV420 family, right? If I set the KEY_COLOR_FORMAT option of a MediaCodec object to COLOR_FormatYUV420Flexible, what format (YUV420P, YUV420SP, ...?) of data should I feed into the MediaCodec object?
How about using COLOR_FormatSurface?
I know MediaCodec has its own surface, which can be used if I set the KEY_COLOR_FORMAT option of a MediaCodec object to COLOR_FormatSurface. With the Camera2 API, I then don't need to write any data to the MediaCodec object myself; I can just drain the output buffers.
However, I need to modify the image from the camera: for example, draw other pictures on it, write some text on it, or insert another video as picture-in-picture.
Can I use ImageReader to read the image from the camera, redraw it, write the new data to MediaCodec's surface, and then drain it out? How do I do that?
EDIT1
I implemented the function by using COLOR_FormatSurface and RenderScript. Here is my code:
onImageAvailable method:
public void onImageAvailable(ImageReader imageReader) {
    try {
        try (Image image = imageReader.acquireLatestImage()) {
            if (image == null) {
                return;
            }
            Image.Plane[] planes = image.getPlanes();
            if (planes.length >= 3) {
                ByteBuffer bufferY = planes[0].getBuffer();
                ByteBuffer bufferU = planes[1].getBuffer();
                ByteBuffer bufferV = planes[2].getBuffer();
                int lengthY = bufferY.remaining();
                int lengthU = bufferU.remaining();
                int lengthV = bufferV.remaining();
                byte[] dataYUV = new byte[lengthY + lengthU + lengthV];
                bufferY.get(dataYUV, 0, lengthY);
                bufferU.get(dataYUV, lengthY, lengthU);
                bufferV.get(dataYUV, lengthY + lengthU, lengthV);
                imageYUV = dataYUV;
            }
        }
    } catch (final Exception ex) {
        // ignore; dropping a single frame is acceptable here
    }
}
Convert YUV_420_888 to RGB:
public static Bitmap YUV_420_888_toRGBIntrinsics(Context context, int width, int height, byte[] yuv) {
    RenderScript rs = RenderScript.create(context);
    ScriptIntrinsicYuvToRGB yuvToRgbIntrinsic = ScriptIntrinsicYuvToRGB.create(rs, Element.U8_4(rs));
    Type.Builder yuvType = new Type.Builder(rs, Element.U8(rs)).setX(yuv.length);
    Allocation in = Allocation.createTyped(rs, yuvType.create(), Allocation.USAGE_SCRIPT);
    Type.Builder rgbaType = new Type.Builder(rs, Element.RGBA_8888(rs)).setX(width).setY(height);
    Allocation out = Allocation.createTyped(rs, rgbaType.create(), Allocation.USAGE_SCRIPT);
    Bitmap bmpOut = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    in.copyFromUnchecked(yuv);
    yuvToRgbIntrinsic.setInput(in);
    yuvToRgbIntrinsic.forEach(out);
    out.copyTo(bmpOut);
    return bmpOut;
}
MediaCodec:
mediaFormat.setInteger(MediaFormat.KEY_COLOR_FORMAT, MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
...
mediaCodec.configure(mediaFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
...
surface = mediaCodec.createInputSurface(); // This surface is not used in Camera APIv2. Camera APIv2 uses ImageReader's surface.
And in another thread:
while (!stop) {
    final byte[] image = imageYUV;
    // Do some YUV computation
    Bitmap bitmap = YUV_420_888_toRGBIntrinsics(getApplicationContext(), width, height, image);
    Canvas canvas = surface.lockHardwareCanvas();
    canvas.drawBitmap(bitmap, matrix, paint);
    surface.unlockCanvasAndPost(canvas);
}
This works, but the performance is poor: it can't produce 30 fps video files (only ~12 fps). Perhaps I should not use COLOR_FormatSurface and the surface's canvas for encoding. The computed YUV data should be written to the MediaCodec directly, without any surface doing any conversion, but I still don't know how to do that.
You are right: YUV_420_888 is a format that can wrap different YUV 420 layouts. The spec deliberately does not prescribe the arrangement of the U and V planes, but there are certain restrictions; e.g. if the U plane has a pixel stride of 2, the same applies to V (and then the underlying byte buffer can be NV21).
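Because of this, code that repacks the planes has to consult the reported pixel stride rather than assume one layout. A hedged C++ sketch of such a repack into NV21 (function name and the no-row-padding assumption are mine; real Image.Plane buffers may have row padding you must skip using rowStride):

```cpp
#include <cstdint>
#include <vector>

// Sketch: build an NV21 buffer (full Y plane followed by interleaved V/U)
// from the three planes of a YUV_420_888 image. Assumes rowStride == width;
// with padded rows you must copy row by row instead.
std::vector<uint8_t> toNV21(const uint8_t* y, const uint8_t* u, const uint8_t* v,
                            int width, int height, int chromaPixelStride) {
    std::vector<uint8_t> out;
    out.reserve(width * height * 3 / 2);
    out.insert(out.end(), y, y + width * height);       // Y plane, full resolution
    const int chromaCount = (width / 2) * (height / 2); // one U and one V per 2x2 block
    for (int i = 0; i < chromaCount; ++i) {
        // chromaPixelStride == 1: tightly packed planar chroma (I420-like);
        // chromaPixelStride == 2: already interleaved semi-planar chroma.
        out.push_back(v[i * chromaPixelStride]);        // NV21 stores V before U
        out.push_back(u[i * chromaPixelStride]);
    }
    return out;
}
```

The same loop handles both device layouts the question worries about, because the stride tells it how far apart consecutive chroma samples sit.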
COLOR_FormatYUV420Flexible is a synonym of YUV_420_888, but they belong to different classes: MediaCodec and ImageFormat, respectively.
The spec explains:
All video codecs support flexible YUV 4:2:0 buffers since LOLLIPOP_MR1.
COLOR_FormatSurface is an opaque format that can deliver the best performance for MediaCodec, but this comes at a price: you cannot directly read or manipulate its content. If you need to manipulate the data that goes to the MediaCodec, then using ImageReader is an option; whether it will be more efficient than ByteBuffer depends on what you do and how you do it. Note that for API 24+ you can work with both camera2 and MediaCodec in C++.
The invaluable resource for MediaCodec details is http://www.bigflake.com/mediacodec. It references a full example of H.264 encoding.
Create a texture ID -> SurfaceTexture -> Surface -> Camera2 -> onFrameAvailable -> updateTexImage -> glBindTexture -> draw something -> swap buffers to MediaCodec's input surface.

OpenGL 4.5 - Shader storage: write in vertex shader, read in fragment shader

Both my fragment and vertex shaders contain the following two declarations:
struct Light {
    mat4 view;
    mat4 proj;
    vec4 fragPos;
};

layout (std430, binding = 0) buffer Lights {
    Light lights[];
};
My problem is that the last field, fragPos, is computed by the vertex shader like this, but the fragment shader does not see the changes the vertex shader makes to fragPos (or any changes at all):
aLight.fragPos = bias * aLight.proj * aLight.view * vec4(vs_frag_pos, 1.0);
... where aLight is lights[i] in a loop. As you can imagine, I'm computing the position of the vertex in the coordinate system of each light present, to be used in shadow mapping. Any idea what's wrong here? Am I doing something fundamentally wrong?
Here is how I initialize my storage:
struct LightData {
    glm::mat4 view;
    glm::mat4 proj;
    glm::vec4 fragPos;
};

glGenBuffers(1, &BBO);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, BBO);
glNamedBufferStorage(BBO, lights.size() * sizeof(LightData), NULL, GL_DYNAMIC_STORAGE_BIT);

// lights is a vector of a wrapper class for LightData.
for (unsigned int i = 0; i < lights.size(); i++) {
    glNamedBufferSubData(BBO, i * sizeof(LightData), sizeof(LightData), &(lights[i]->data));
}
It may be worth noting that if I instead move fragPos to a fixed-size out variable in the vertex shader (out fragPos[2]), leave the results there, add the matching in fragPos[2] in the fragment shader, and use that for the rest of my computations, then everything works. So what I want to know is why my fragment shader does not see the numbers crunched by the vertex shader.
I won't be perfectly precise, but I'll try to explain why your fragment shader does not see what your vertex shader writes.
When your vertex shader writes values into the buffer, those values are not required to reach video memory immediately; they may sit in a cache. The same applies when your fragment shader reads the buffer: it may read values from a cache (which is not the same cache the vertex shader wrote to).
To avoid this problem you must do two things. First, declare your buffer as coherent in the GLSL: layout(std430) coherent buffer ...
Second, after your writes you must issue a memory barrier (which effectively says: careful, I wrote values into this buffer; the values you are about to read may be stale, please pick up the new ones).
How do you do that? By calling memoryBarrierBuffer after your writes: https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/memoryBarrierBuffer.xhtml
BTW: don't forget to divide by w after your projection.
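That "divide by w" is the usual perspective divide. A minimal sketch of what has to happen to the computed fragPos before the shadow-map lookup (plain structs here, no GLM; the same one-liner applies in GLSL):

```cpp
// After multiplying by bias * proj * view, the result is in homogeneous clip
// space; dividing x, y and z by w performs the perspective divide that turns
// it into the normalized coordinates the shadow-map lookup expects.
struct Vec4 { float x, y, z, w; };

Vec4 perspectiveDivide(Vec4 clip) {
    return { clip.x / clip.w, clip.y / clip.w, clip.z / clip.w, 1.0f };
}
```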

'Double' missing from VertexAttribPointerType enum in OpenTK 1.0?

I'm trying to specify the type of my GL.VertexAttribPointer(...) argument as GL_DOUBLE. This should be valid according to the documentation for this OpenTK function for ES20 (link).
However, the VertexAttribPointerType enum seems to be missing the Double type for OpenTK-1.0. In other words, the following line:
GL.VertexAttribPointer(ATTRIBUTE_COORD2D, 3, VertexAttribPointerType.Double, false, 0, quadVertices);
...fails to compile, since VertexAttribPointerType only provides definitions for the following:
using System;

namespace OpenTK.Graphics.ES20
{
    public enum VertexAttribPointerType
    {
        Byte = 5120,
        UnsignedByte,
        Short,
        UnsignedShort,
        Float = 5126,
        Fixed = 5132
    }
}
Is there a workaround for this issue? How else are you supposed to pass a double[] of vertices to the vertex shader?
The OpenGL ES 2.0 manual page for glVertexAttribPointer says:
GL_BYTE, GL_UNSIGNED_BYTE, GL_SHORT, GL_UNSIGNED_SHORT, GL_FIXED, or GL_FLOAT are accepted
So the reason OpenTK has no Double entry is that the underlying API doesn't support it either. The OpenTK documentation may be suffering from a copy-paste error.
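The practical workaround is to narrow the doubles to floats on the CPU and upload the float array with VertexAttribPointerType.Float instead. A sketch of that conversion (shown in C++; the idea translates directly to C#, and the function name is mine):

```cpp
#include <vector>

// ES 2.0 vertex attributes cannot be doubles, so convert once on the host
// and hand the resulting float array to GL.VertexAttribPointer with the
// Float type. Precision beyond float is lost, which is usually acceptable
// for vertex positions.
std::vector<float> toFloats(const std::vector<double>& src) {
    return std::vector<float>(src.begin(), src.end()); // per-element narrowing
}
```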

A struct containing entire OpenGL ES 2.0 state

Has anyone got a C struct whose members describe the entire OpenGL ES 2.0 state? It would look something like this:
struct OpenGLES20State
{
    int activeTexture;
    bool scissorEnabled;
    Rectangle scissorRectangle;
    bool stencilEnabled;
    int stencilFunc;
    int stencilOpFail;
    int stencilOpDFail;
    int stencilOpDPass;
    //
    // and a lot more...
    //
};
You can build such a structure by carefully going through the OpenGL ES 2.0 specification available from Khronos: http://www.khronos.org/opengles/2_X/
Also, an interesting place to look for a similar implementation is the Mesa3D library source code.
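The exercise is mechanical once you notice that every field corresponds to one query from the spec's state tables. A partial C++ sketch of that mapping (the GL enum names in the comments are the real query tokens; the struct itself is illustrative and far from complete):

```cpp
#include <cstdint>

// Each member is filled by the glGetIntegerv / glIsEnabled query noted
// alongside it. A full struct would continue through the ES 2.0 state
// tables: blending, depth, culling, vertex attribs, bound objects, etc.
struct OpenGLES20StateSketch {
    int32_t activeTexture;        // glGetIntegerv(GL_ACTIVE_TEXTURE, ...)
    bool    scissorEnabled;       // glIsEnabled(GL_SCISSOR_TEST)
    int32_t scissorBox[4];        // glGetIntegerv(GL_SCISSOR_BOX, ...)
    bool    stencilEnabled;       // glIsEnabled(GL_STENCIL_TEST)
    int32_t stencilFunc;          // glGetIntegerv(GL_STENCIL_FUNC, ...)
    int32_t stencilFail;          // glGetIntegerv(GL_STENCIL_FAIL, ...)
    int32_t stencilPassDepthFail; // glGetIntegerv(GL_STENCIL_PASS_DEPTH_FAIL, ...)
    int32_t stencilPassDepthPass; // glGetIntegerv(GL_STENCIL_PASS_DEPTH_PASS, ...)
};
```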