2 Shaders using the same vertex data - objective-c

I'm having problems rendering with two different shaders. I'm currently rendering shapes that represent dice. When a die is selected by the user, I want to draw an outline by drawing the die completely red and slightly scaled up, then rendering the proper die over it. At the moment some of the dice, for some reason, render the wrong die for the outline but the right one for the proper foreground die.
I'm wondering if they aren't getting their vertex data mixed up somehow. I'm not sure if doing something like this is even allowed in OpenGL:
glGenBuffers(1, &_vertexBuffer);
glBindBuffer(GL_ARRAY_BUFFER, _vertexBuffer);
glBufferData(GL_ARRAY_BUFFER, numVertices*sizeof(GLfloat), vertices, GL_STATIC_DRAW);
glEnableVertexAttribArray(effect->vertCoord);
glVertexAttribPointer(effect->vertCoord, 3, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(effect->toon_vertCoord);
glVertexAttribPointer(effect->toon_vertCoord, 3, GL_FLOAT, GL_FALSE, 0, 0);
I'm trying to bind the vertex data to two different shaders here.
When I load my first shader I have:
vertCoord = glGetAttribLocation(TexAndLighting, "position");
and the other shader has:
toon_vertCoord = glGetAttribLocation(Toon, "position");
If I use the shaders independently of each other they work fine, but when I try to render both, one on top of the other, they sometimes get the models mixed up. Here is how my draw function looks:
- (void) draw {
    [EAGLContext setCurrentContext:context];
    glBindVertexArrayOES(_vertexArray);

    effect->modelViewMatrix = mvm;
    effect->numberColour = GLKVector4Make(numbers[colorSelected].r, numbers[colorSelected].g, numbers[colorSelected].b, 1);
    effect->faceColour = GLKVector4Make(faceColors[colorSelected].r, faceColors[colorSelected].g, faceColors[colorSelected].b, 1);

    if (selected) {
        [effect drawOutline]; // this function prepares the shader
        glDrawElements(GL_TRIANGLES, numIndices, GL_UNSIGNED_SHORT, 0);
    }

    [effect prepareToDraw]; // same with this one
    glDrawElements(GL_TRIANGLES, numIndices, GL_UNSIGNED_SHORT, 0);
}
This is what it looks like; as you can see, most of the outlines are using the wrong die, or none at all:
links to full code:
http://pastebin.com/yDKb3wrD Dice.mm //rendering stuff
http://pastebin.com/eBK0pzrK Effects.mm //shader stuff
http://pastebin.com/5LtDAk8J //my shaders, shouldn't be anything to do with them though
TL;DR: I'm trying to use two different shaders that use the same vertex data, but the models get mixed up when rendering with both at the same time. At least that's what I think is going wrong; I'm quite stumped.

You're correct in that this is not allowed (or rather, it doesn't do what you think it does):
glEnableVertexAttribArray(effect->vertCoord);
glVertexAttribPointer(effect->vertCoord, 3, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(effect->toon_vertCoord);
glVertexAttribPointer(effect->toon_vertCoord, 3, GL_FLOAT, GL_FALSE, 0, 0);
Attributes have no particular linkage to shaders. You can't tell OpenGL "attribute N is for this shader, and attribute M is for this other shader."
Attributes simply bind to indexes, and any shader that happens to have an input at that index will slurp the data that was last bound to that particular index.
So if you have two shaders, let's call them "toon" and "normal", which have the following (hypothetical) inputs:
normal
    input vertCoord (index = 0)
    input texCoord (index = 1)
toon
    input toon_vertCoord (index = 0)
    input toon_somethingElse (index = 1)
Then when you run this code:
glEnableVertexAttribArray(effect->vertCoord);
glVertexAttribPointer(effect->vertCoord, 3, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(effect->toon_vertCoord);
glVertexAttribPointer(effect->toon_vertCoord, 3, GL_FLOAT, GL_FALSE, 0, 0);
draw_normal_object();
Your effect->vertCoord no longer points at what you set for it, because toon_vertCoord has the same index and the second call overwrote the attribute pointer. So your normal shader will be sampling whatever was last bound for toon_vertCoord, and everything will be messed up.
What you want to do is enable/pointer all of the attributes for the normal shader, draw the normal objects, then switch to the toon shader, enable/pointer all of its attributes, and then draw the toon objects.
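Concretely, a sketch of that flow, assuming TexAndLighting and Toon are the linked program objects from your loading code and that the vertex and index buffers from your setup are still bound:
// Outline pass: switch to the toon program and point its attribute at the data.
glUseProgram(Toon);
glEnableVertexAttribArray(toon_vertCoord);
glVertexAttribPointer(toon_vertCoord, 3, GL_FLOAT, GL_FALSE, 0, 0);
glDrawElements(GL_TRIANGLES, numIndices, GL_UNSIGNED_SHORT, 0);

// Main pass: switch programs and re-specify the pointer for its attribute.
glUseProgram(TexAndLighting);
glEnableVertexAttribArray(vertCoord);
glVertexAttribPointer(vertCoord, 3, GL_FLOAT, GL_FALSE, 0, 0);
glDrawElements(GL_TRIANGLES, numIndices, GL_UNSIGNED_SHORT, 0);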

Related

Using metal to snapshot SCNRenderer produces darker output image

I'm using a Metal render pass to snapshot my SceneKit scene attached to an SCNRenderer. The method is faster than using the UIImage-producing SCNRenderer.snapshot(), but the output of the two methods is different; my method produces a darker image. I thought this could be due to either a color-space difference or an alpha issue.
The image on the right shows my custom method, in which the color doesn't look right.
The color space seems to be the same in the UIImage produced by both the standard method and my own (kCGColorSpaceModelRGB; sRGB IEC61966-2.1), so I don't think this is the issue.
I'll share elements of the custom render code that I believe are relevant.
I configure the MTLRenderPassDescriptor as follows:
renderPassDescriptor.colorAttachments[0].loadAction = MTLLoadAction.clear
renderPassDescriptor.colorAttachments[0].clearColor = MTLClearColorMake(0, 0, 0, 0)
renderPassDescriptor.colorAttachments[0].storeAction = MTLStoreAction.store
I then create a texture to render into. I create a CGContext with:
bitsPerComponent: 8
bitsPerPixel: 32
colorSpace: CGColorSpaceCreateDeviceRGB()
bitmapInfo: CGBitmapInfo.byteOrder32Little.rawValue | CGImageAlphaInfo.noneSkipFirst.rawValue
fillColor: UIColor.clear.cgColor
This is an area I'm concerned about. I've tried other color spaces, CGBitmapInfo and CGImageAlphaInfo flags, and other fill colors. The fill color does have an effect on the output, but I do need transparency, so clear does feel correct.
I create a MTLTextureDescriptor.texture2DDescriptor with .rgba8Unorm as the pixel format, with usage MTLTextureUsage(rawValue: MTLTextureUsage.renderTarget.rawValue | MTLTextureUsage.shaderRead.rawValue).
I then go on to hand my texture to the render pass descriptor and run a render command.
renderPassDescriptor.colorAttachments[0].texture = texture
let commandBuffer = commandQueue.makeCommandBuffer()!
renderer.render(atTime: time, viewport: viewport, commandBuffer: commandBuffer,
passDescriptor: renderPassDescriptor)
commandBuffer.commit()
In my normal pipeline, I go on here to create a CVPixelBuffer, but I introduced the creation of a CGImage to be able to more easily preview the image in the Xcode debugger. I do this using the following:
var data = Array<UInt8>(repeatElement(0, count: 4*mtlTexture.width*mtlTexture.height))
mtlTexture.getBytes(&data, bytesPerRow: 4*mtlTexture.width, from: MTLRegionMake2D(0, 0, mtlTexture.width, mtlTexture.height), mipmapLevel: 0)
let bitmapInfo = CGBitmapInfo(rawValue: (CGBitmapInfo.byteOrder32Big.rawValue | CGImageAlphaInfo.premultipliedLast.rawValue))
let colorSpace = CGColorSpaceCreateDeviceRGB()
let context = CGContext(data: &data,
                        width: mtlTexture.width,
                        height: mtlTexture.height,
                        bitsPerComponent: 8,
                        bytesPerRow: 4*mtlTexture.width,
                        space: colorSpace,
                        bitmapInfo: bitmapInfo.rawValue)
return context?.makeImage()
And this CGImage (or the CVPixelBuffer) is where I first observe the darkened image. So I believe that either the initial Metal render pass is creating the color disparity, or I'm performing a wrong conversion into each of the other formats I use.
An issue that is perhaps related can be found here:
https://github.com/MetalPetal/MetalPetal/issues/76
That issue seems to be taking place in a render view, and I don't use a SceneView or anything called a renderView. I have a SCNRenderer and I turn snapshots into images to write to video buffers, but the color issue presents itself earlier than those steps. The post does mention that the render view should use the format bgra8Unorm_srgb, so I wonder if that should be introduced in my pipeline, but I just can't work out where it belongs. Changing the pixelFormat from rgba8Unorm to bgra8Unorm_srgb in my MTLTextureDescriptor doesn't seem to make any difference.
Does this effect look familiar to anyone, or can anyone shed light on this?
It should work if you choose CGColorSpaceCreateWithName(kCGColorSpaceSRGB) for the bitmap context and MTLPixelFormatRGBA8Unorm_sRGB for the texture format.
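A rough sketch of those two changes, written here in Objective-C since those are the constant names in question; device, width, height, and data stand in for the question's own values:
// sRGB pixel format for the render-target texture...
MTLTextureDescriptor *desc =
    [MTLTextureDescriptor texture2DDescriptorWithPixelFormat:MTLPixelFormatRGBA8Unorm_sRGB
                                                       width:width
                                                      height:height
                                                   mipmapped:NO];
desc.usage = MTLTextureUsageRenderTarget | MTLTextureUsageShaderRead;
id<MTLTexture> texture = [device newTextureWithDescriptor:desc];

// ...and an sRGB color space for the bitmap context the pixels are read back into.
CGColorSpaceRef srgb = CGColorSpaceCreateWithName(kCGColorSpaceSRGB);
CGContextRef cgContext = CGBitmapContextCreate(data, width, height, 8, 4 * width, srgb,
                                               kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(srgb);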

GLSL 3.2 mapping shader arguments

I'm trying to create a high-level Objective-C OpenGL shader wrapper that allows me to execute various GL shaders without a lot of GL code that clutters the application logic.
So something like this for a shader with two 'in' arguments to create a quad with a different color in every corner:
OpenGLShader* theShader = [OpenGLShaderManager shaderWithName:@"MyShader"];
glUseProgram(theShader.program);
float colorsForQuad[4][4] = {{1.0f, 0.0f, 0.0f, 1.0f}, {0.0f, 1.0 ....}};
[theShader.arguments[@"inColor"] setValue:colorsForQuad forNumberOfVertices:4];
float positionsForQuad[4][4] = {{-1.0f, -1.0f, 0.0f, 1.0f}, {-1.0f, 1.0f, ....}};
[theShader.arguments[@"inPosition"] setValue:positionsForQuad forNumberOfVertices:4];
glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
The setValue:forNumberOfVertices: function looks like this:
int bytesForGLType = numBytesForGLType(self.openGLValueType);
glBindVertexArray(self.vertexArrayObject);
GetError();
glBindBuffer(GL_ARRAY_BUFFER, self.vertexBufferObject);
GetError();
glBufferData(GL_ARRAY_BUFFER, bytesForGLType * numVertices, value, GL_STATIC_DRAW);
GetError();
glEnableVertexAttribArray((GLuint)self.boundLocation);
GetError();
glVertexAttribPointer((GLuint)self.boundLocation, numVertices,
GL_FLOAT, GL_FALSE, 0, 0);
I think the problem is that each argument has its own VAO and VBO but the shader needs the data of all arguments when it is executed.
I can obviously only bind one buffer at a time.
The examples I've seen so far only use one VAO and one VBO and create a C structure containing all the data needed.
This however would make my current modular approach much harder.
Isn't there any option to have OpenGL copy the data so it doesn't need to be available and bound when glDraw... is called?
Edit
I found out that using a shared Vertex Array Object is enough to solve the issue.
However, I would appreciate some more insight on when things are actually copied to the GPU.
The glVertexAttribPointer function takes these parameters:
void glVertexAttribPointer(GLuint index,
                           GLint size,
                           GLenum type,
                           GLboolean normalized,
                           GLsizei stride,
                           const GLvoid *pointer);
I think the problem is that you pass the number of vertices as the "size" parameter. That is not what it is meant for. It controls how many components the attribute has: when the vertex attribute is a vec3, "size" should be 3; when it's a single float, it should be 1; and so on.
EDIT: As Reto Koradi pointed out, using 0 as "stride" is fine in this case.
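For the hypothetical inColor attribute from the question (one vec4 per vertex), the calls would look roughly like this; the vertex count belongs in glBufferData and the draw call, not in "size":
// "size" is the number of components per vertex (4 for a vec4), not the vertex count.
glBufferData(GL_ARRAY_BUFFER, numVertices * 4 * sizeof(GLfloat), colorsForQuad, GL_STATIC_DRAW);
glEnableVertexAttribArray((GLuint)self.boundLocation);
glVertexAttribPointer((GLuint)self.boundLocation, 4, GL_FLOAT, GL_FALSE, 0, 0);

// The number of vertices only shows up in the draw call.
glDrawArrays(GL_TRIANGLE_FAN, 0, numVertices);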

OpenGL: does a texture have storage?

In OpenGL, after a texture name is generated, the texture does not have storage. With glTexImage2D you can create storage for the texture.
How can you determine if a texture has storage?
You can't do exactly that in ES 2.0. In ES 3.1 and later, you can call:
GLint width = 0;
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &width);
if (width > 0) {
    // texture has storage
}
The glIsTexture() call that is available in ES 2.0 may give you the desired information depending on what exactly your requirements are. While it will not tell you if the texture has storage, it will tell you if the given id is valid, and if it was ever bound as a texture. For example:
GLuint texId = 0;
GLboolean isTex = glIsTexture(texId);
// Result is GL_FALSE because texId is not a valid texture name.
glGenTextures(1, &texId);
isTex = glIsTexture(texId);
// Result is GL_FALSE because, while texId is a valid name, it was never
// bound yet, so the texture object has not been created.
glBindTexture(GL_TEXTURE_2D, texId);
glBindTexture(GL_TEXTURE_2D, 0);
isTex = glIsTexture(texId);
// Result is GL_TRUE because the texture object was created when the
// texture was previously bound.
I believe you can use glGetTexLevelParameterfv to get the height (or width) of the texture. A value of zero for either of these parameters means the texture name represents the null texture.
Note I haven't tested this!
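A minimal sketch of that idea (the helper name is made up, and as noted above it is untested; glGetTexLevelParameterfv is not available in ES 2.0):
// Returns true if mip level 0 of the texture bound to `target` has storage.
static bool textureHasStorage(GLenum target) {
    GLfloat width = 0.0f;
    glGetTexLevelParameterfv(target, 0, GL_TEXTURE_WIDTH, &width);
    return width > 0.0f;
}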

OpenGL Texture rendering as black

I'm using the Syphon framework to try and push frames of video from a server to a client application.
Syphon requires you to use OpenGL textures instead of normal images.
Because of this, I'm trying to render a CGImageRef as a texture and send it on to be published.
I'm creating my CGL context as so:
CGLPixelFormatAttribute attribs[13] = {
    kCGLPFAOpenGLProfile, (CGLPixelFormatAttribute)kCGLOGLPVersion_3_2_Core, // This sets the context to 3.2
    kCGLPFAColorSize, (CGLPixelFormatAttribute)24,
    kCGLPFAAlphaSize, (CGLPixelFormatAttribute)8,
    kCGLPFAAccelerated,
    kCGLPFADoubleBuffer,
    kCGLPFASampleBuffers, (CGLPixelFormatAttribute)1,
    kCGLPFASamples, (CGLPixelFormatAttribute)4,
    (CGLPixelFormatAttribute)0
};
CGLPixelFormatObj pix;
GLint npix;
CGLChoosePixelFormat(attribs, &pix, &npix);
CGLCreateContext(pix, 0, &_ctx);
I already have a CGImageRef that I know can be rendered properly as an NSImage.
I'm rendering the texture as so:
CGLLockContext(cgl_ctx);
if (_texture) {
    glDeleteTextures(1, &_texture);
}
int width = 1920;
int height = 1080;
GLubyte* imageData = malloc(width * height * 4);
CGContextRef imageContext = CGBitmapContextCreate(imageData, width, height, 8, width * 4, CGColorSpaceCreateDeviceRGB(), kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedLast);
CGContextDrawImage(imageContext, CGRectMake(0.0, 0.0, width, height), image);
CGContextRelease(imageContext);
GLuint frameBuffer;
GLenum status;
glGenFramebuffersEXT(1, &frameBuffer);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, frameBuffer);
glGenTextures(1, &_texture);
glBindTexture(GL_TEXTURE_2D, _texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_TEXTURE_2D, imageData);
status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
if (status != GL_FRAMEBUFFER_COMPLETE_EXT) {
    NSLog(@"OpenGL Error");
}
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
CGLUnlockContext(cgl_ctx);
The rendering code is in a different class, but the context should be passed through and is the same.
I've tried the advice in pretty much every other instance of this problem to no avail.
The second-to-last parameter in glTexImage2D is:
type
Specifies the data type of the pixel data. The following symbolic values are accepted:
GL_UNSIGNED_BYTE, GL_BYTE, GL_UNSIGNED_SHORT, GL_SHORT, GL_UNSIGNED_INT, GL_INT, GL_FLOAT, GL_UNSIGNED_BYTE_3_3_2, GL_UNSIGNED_BYTE_2_3_3_REV, GL_UNSIGNED_SHORT_5_6_5, GL_UNSIGNED_SHORT_5_6_5_REV, GL_UNSIGNED_SHORT_4_4_4_4, GL_UNSIGNED_SHORT_4_4_4_4_REV, GL_UNSIGNED_SHORT_5_5_5_1, GL_UNSIGNED_SHORT_1_5_5_5_REV, GL_UNSIGNED_INT_8_8_8_8, GL_UNSIGNED_INT_8_8_8_8_REV, GL_UNSIGNED_INT_10_10_10_2, and GL_UNSIGNED_INT_2_10_10_10_REV.
GL_TEXTURE_2D doesn't make sense there; it should be whatever the data type of the elements of imageData is (GL_UNSIGNED_BYTE in this case).
You should also be checking your OpenGL errors with glGetError or ARB_debug_output. You would have immediately been shown what's wrong:
Source:OpenGL Type:Error ID:5 Severity:High Message:GL_INVALID_ENUM in glTexImage2D(incompatible format = GL_RGBA, type = GL_TEXTURE_2D)
There are a few problems in this code. The following are critical to get things working:
As also pointed out in an earlier answer by @orost, the type parameter for the glTexImage2D() call is invalid. It should be:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
GL_RGBA, GL_UNSIGNED_BYTE, NULL);
The texture is never attached as an FBO target. While you set up the FBO, you need:
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
GL_TEXTURE_2D, _texture, 0);
There are a couple more items that probably won't keep you from getting it running, but I would recommend to change them anyway:
You don't need to specify data for the texture if you're going to create the content by rendering to it. The data you pass to glTexImage2D() is uninitialized anyway, so that can't do much good. It's much cleaner to pass NULL as the data argument, as I already did in the call shown above.
Since you're using OpenGL 3.2, there's really no need to use the EXT form of the FBO entry points. This is standard functionality in OpenGL 3.x. The EXT form will probably work as long as you use it consistently, but you risk ugly surprises if you mix it with standard entry points.
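Putting those corrections together, a sketch of the FBO setup with the core entry points, using the same variable names as the question:
GLuint frameBuffer;
glGenFramebuffers(1, &frameBuffer);
glBindFramebuffer(GL_FRAMEBUFFER, frameBuffer);

glGenTextures(1, &_texture);
glBindTexture(GL_TEXTURE_2D, _texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL); // no data needed if you render into it

// Attach the texture as the color target before checking completeness.
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, _texture, 0);

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    NSLog(@"Framebuffer incomplete");
}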

Changing the alpha/opacity channel on a texture using GLKit

I am new to OpenGL ES, and I can't seem to figure out how you would change the alpha/opacity on a texture loaded with GLKTextureLoader.
Right now I just draw the texture with the following code.
self.texture.effect.texture2d0.enabled = YES;
self.texture.effect.texture2d0.name = self.texture.textureInfo.name;
self.texture.effect.transform.modelviewMatrix = [self modelMatrix];
[self.texture.effect prepareToDraw];
glEnableVertexAttribArray(GLKVertexAttribPosition);
glEnableVertexAttribArray(GLKVertexAttribTexCoord0);
NKTexturedQuad _quad = self.texture.quad;
long offset = (long)&_quad;
glVertexAttribPointer(GLKVertexAttribPosition,
                      2,
                      GL_FLOAT,
                      GL_FALSE,
                      sizeof(NKTexturedVertex),
                      (void *)(offset + offsetof(NKTexturedVertex, geometryVertex)));
glVertexAttribPointer(GLKVertexAttribTexCoord0,
                      2,
                      GL_FLOAT, GL_FALSE,
                      sizeof(NKTexturedVertex),
                      (void *)(offset + offsetof(NKTexturedVertex, textureVertex)));
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
Any advice would be very helpful :)
I am no GL expert, but drawing with a changed alpha value does not seem to work as described by rickster.
As far as I understand, the values passed to glBlendColor are only used with the GL_CONSTANT_… family of glBlendFunc factors.
This will override the texture's alpha values and draw with a fixed value instead:
glEnable(GL_BLEND);
glBlendFunc(GL_CONSTANT_ALPHA, GL_ONE_MINUS_CONSTANT_ALPHA);
glBlendColor(1.0, 1.0, 1.0, yourAlphaValue);
glDraw... // the draw operations
Further reference can be found here: http://www.opengl.org/wiki/Blending#Blend_Color
As long as you're in the OpenGL ES 1.1 world (or the emulated-1.1 world of GLKBaseEffect), alpha is a property either of the (per-pixel) bitmap data in the texture or of the (complete) OpenGL ES state you're drawing with. You can't set an opacity level for a texture as a whole, on its own. So, you have three options:
Change the alpha of the texture. This means changing the texture bitmap data itself -- use the 2D image context of your choice to draw the image at half (or whatever) alpha, and read the resulting image into an OpenGL ES texture. Probably not a great idea unless the alpha you want will be constant for the life of your app. In which case you might as well just go back to Photoshop (or whatever you're using to create your image assets) and set the alpha there.
Change the alpha you're drawing with. After you prepareToDraw, set up blending in GL:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_CONSTANT_ALPHA);
glBlendColor(1.0, 1.0, 1.0, myDesiredAlphaValue);
glDraw... // whatever you're drawing
Don't forget to A) draw your partially transparent content after any content you want it blended on top of and B) disable blending before rendering opaque content again on the next frame.
Ditch GLKBaseEffect and write your own shaders. Shaders that work like the 1.1 fixed-function pipeline are a dime a dozen -- you can even get started by using the shaders packaged with the Xcode "OpenGL Game" project template or looking at the shaders GLKit writes in the Xcode Frame Capture tool. Once you have such shaders, changing the alpha of a color you got out of a texel lookup is a simple operation:
vec4 color = texture2D(texUnit, texCoord);
color.a = myDesiredAlphaValue;
gl_FragColor = color;
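On the application side, a value like that would typically be fed in through a uniform; a small sketch, where the uniform name u_alpha and the program handle are hypothetical:
// Fragment shader side (hypothetical): uniform lowp float u_alpha;  ...  color.a = u_alpha;
glUseProgram(program);
GLint alphaLoc = glGetUniformLocation(program, "u_alpha");
glUniform1f(alphaLoc, myDesiredAlphaValue);
// ...then issue the draw calls as usual.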