I am trying to apply post-processing effects in a small test program for OS X. The idea is to render the scene into a texture and then render that texture to the screen (I haven't even gotten to writing a second shader for the final image yet). So far the code just renders a black screen.
Here is the code to create the FBO. It is executed once at the beginning.
-(void)setupFramebuffer{
[self setOpenGLContext:self.openGLContext];
glEnable(GL_TEXTURE_2D);
CGSize textureSize = CGSizeMake(1024, 1024);
glGenFramebuffersEXT(1, &textureFramebuffer);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, textureFramebuffer);
glGenTextures(1, &texture);
textureImage = MFGLImageCreateWithImageID(texture, textureSize);
glBindTexture(GL_TEXTURE_2D, texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, textureSize.width, textureSize.height, 0,
GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
GL_TEXTURE_2D, texture, 0);
if(glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) != GL_FRAMEBUFFER_COMPLETE_EXT)
NSLog(#"Failed to create FBO: %i", glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT));
glBindTexture(GL_TEXTURE_2D, 0);
g_CurrentlyBoundTexture = 0;
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
}
Here is the rendering code.
- (void)drawRect:(NSRect)dirtyRect
{
glUniform4f(g_OffsetUniform, dirtyRect.size.width/2.0, dirtyRect.size.height/2.0, 1, 1);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, textureFramebuffer);
glClearColor(0, 0, 0, 1);
glClear(GL_COLOR_BUFFER_BIT);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
glClearColor(0, 0, 0, 1);
glClear(GL_COLOR_BUFFER_BIT);
[batchRenderer addFrame:CGRectMake(0, 0, textureImage.size.width, textureImage.size.height) withAlpha:1 forImage:textureImage renderAlpha:YES offset:0 color:[NSColor redColor] renderMode:GL_TRIANGLES];
[batchRenderer finishAndDrawBatch];
glFlush();
}
Right now I am just clearing the FBO's buffer and drawing the texture with a red color. I pass the texture image to my batch renderer for rendering; however, I still get nothing but a black screen. (The batch renderer is not the issue; the exact same code is used in other programs.) Furthermore, when I comment out the code that binds the FBO in the draw method, it draws the red without any issue.
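For context, here is a minimal sketch of the ordering a render-to-texture pass usually follows (a general pattern, not necessarily the fix for the code above; windowWidth/windowHeight and the scene/quad draw calls are placeholders):
// Pass 1: render the scene into the FBO's texture.
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, textureFramebuffer);
glViewport(0, 0, 1024, 1024); // match the attached texture's size
glClearColor(0, 0, 0, 1);
glClear(GL_COLOR_BUFFER_BIT);
// ... draw the scene that should appear in the texture here ...
// Pass 2: switch back to the window framebuffer and draw a quad that samples the texture.
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
glViewport(0, 0, windowWidth, windowHeight); // placeholder window size
glClear(GL_COLOR_BUFFER_BIT);
glBindTexture(GL_TEXTURE_2D, texture);
// ... draw the fullscreen/textured quad (the batch renderer call above) ...
glFlush();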
I'm learning OpenGL ES with the tutorial on raywenderlich.com: http://www.raywenderlich.com/4404/opengl-es-2-0-for-iphone-tutorial-part-2-textures
When I started tweaking the example project to capture a UIView's contents as the texture to be rendered, all I get is a black screen, like this:
The black view is the OpenGL ES view.
I used the code posted by Tommy in this post: Render contents of UIView as an OpenGL texture, and here is my version:
- (GLuint)createTexture:(UIView *)view
{
size_t width = CGRectGetWidth(view.layer.bounds) * [UIScreen mainScreen].scale;
size_t height = CGRectGetHeight(view.layer.bounds) * [UIScreen mainScreen].scale;
GLubyte * texturePixelBuffer = (GLubyte *) calloc(width * height * 4,
sizeof(GLubyte));
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(texturePixelBuffer,
width, height, 8, width*4, colorSpace,
kCGImageAlphaPremultipliedLast |
kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
[view.layer renderInContext:context];
CGContextRelease(context);
GLuint texName;
glGenTextures(1, &texName);
glBindTexture(GL_TEXTURE_2D, texName);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA,
GL_UNSIGNED_BYTE, texturePixelBuffer);
free(texturePixelBuffer);
return texName;
}
According to the OpenGL ES 2.0 specification, texture dimensions can be either a power of two or not. But for non-power-of-two textures you must use
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
to enable non-power-of-two texture support: in ES 2.0, NPOT textures only work with GL_CLAMP_TO_EDGE wrapping and non-mipmapped filters; anything else makes the texture incomplete, and it samples as black.
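Applied to the createTexture: method above, that means setting all four parameters after binding the texture and before glTexImage2D. A sketch of just that portion:
glBindTexture(GL_TEXTURE_2D, texName);
// NPOT-safe parameters: non-mipmapped filtering and clamp-to-edge wrapping
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA,
GL_UNSIGNED_BYTE, texturePixelBuffer);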
It's a very strange problem. I create a texture using a method copied from Apple's sample code. It works fine in Apple's sample, but not in my project: the texture does not show up, only the color defined by glColor4f. I used glIsTexture and glGetError to check, and they report that nothing is wrong. This only happens the first time I load the texture; if I release the texture and reload it, it works, with the same code. Are there any other ways to check for OpenGL errors?
Here's the code I'm using to load the texture:
- (FETexture *)loadTextureWithPath:(NSString*)name{
NSURL *url = nil;
CGImageSourceRef src;
CGImageRef image;
CGContextRef context = nil;
CGColorSpaceRef colorSpace;
GLuint textureId;
GLuint pboId;
GLubyte *data;
url = [NSURL fileURLWithPath: name];
src = CGImageSourceCreateWithURL((__bridge CFURLRef)url, NULL);
if (!src) {
NSLog(#"No image");
return nil;
}
image = CGImageSourceCreateImageAtIndex(src, 0, NULL);
CFRelease(src);
GLuint width = (GLint)CGImageGetWidth(image);
GLuint height = (GLint)CGImageGetHeight(image);
data = (GLubyte*) calloc(width * height * 4, sizeof(GLubyte));
colorSpace = CGColorSpaceCreateDeviceRGB();
context = CGBitmapContextCreate(data, width, height, 8, 4 * width, colorSpace, kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Host);
CGColorSpaceRelease(colorSpace);
// Core Graphics referential is upside-down compared to OpenGL referential
// Flip the Core Graphics context here
// An alternative is to use flipped OpenGL texture coordinates when drawing textures
CGContextTranslateCTM(context, 0.0, height);
CGContextScaleCTM(context, 1.0, -1.0);
// Set the blend mode to copy before drawing since the previous contents of memory aren't used. This avoids unnecessary blending.
CGContextSetBlendMode(context, kCGBlendModeCopy);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);
CGContextRelease(context);
CGImageRelease(image);
glGenTextures(1, &textureId);
glGenBuffers(1, &pboId);
// Bind the texture
glBindTexture(GL_TEXTURE_2D, textureId);
// Bind the PBO
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pboId);
// Upload the texture data to the PBO
glBufferData(GL_PIXEL_UNPACK_BUFFER, width * height * 4 * sizeof(GLubyte), data, GL_STATIC_DRAW);
// Setup texture parameters
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
// OpenGL likes the GL_BGRA + GL_UNSIGNED_INT_8_8_8_8_REV combination
// Use offset instead of pointer to indictate that we want to use data copied from a PBO
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, 0);
// We can delete the application copy of the texture data now
free(data);
glBindTexture(GL_TEXTURE_2D, 0);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
FETexture *texture = [FETexture new];
texture.textureID = textureId;
texture.bufferID = pboId;
texture.width = width;
texture.height = height;
return texture;
}
You've not shown any of the relevant code for the draw site. Perhaps you're not binding this texture at draw time.
You can use the OpenGL Profiler in "Graphics tools for Xcode" at https://developer.apple.com/downloads/index.action to debug GL errors and state.
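For reference, a minimal sketch of what binding this texture at draw time typically looks like, using the FETexture object returned above (the sampler uniform location is a placeholder for whatever your shader actually declares):
glActiveTexture(GL_TEXTURE0); // select texture unit 0
glBindTexture(GL_TEXTURE_2D, texture.textureID); // bind the texture created in loadTextureWithPath:
glUniform1i(samplerUniformLocation, 0); // point the sampler uniform at unit 0 (placeholder location)
// ... issue the draw call for the textured geometry ...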
I would like to render a simple texture to 4 vertices with my fragment shader and read the resulting image back with glReadPixels. I set the glReadPixels size to the same size as the source picture, but I don't get a complete image back: there is always a black bar on the right side, and the image seems to be compressed.
The part of the image that is returned is correct (it shows the result of my Sobel shader), so I don't think the error is in the glReadPixels part or in the setImageSource part. But I'm not sure...
Here is my method to set the image source:
-(void)setImageSource : (unsigned char*) image
{
static const GLfloat textureVertices[] =
{
0.0f, 1.0f,
0.0f, 0.0f,
1.0f, 1.0f,
1.0f, 0.0f,
};
static const GLfloat squareVertices[] =
{
-1.0f, -1.0f,
1.0f, -1.0f,
-1.0f, 1.0f,
1.0f, 1.0f
};
glGenTextures(1, &pictureTexture);
glActiveTexture(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, pictureTexture);
glUniform1i(uniforms[UNIFORM_VIDEOFRAME], 0);
glVertexAttribPointer(ATTRIB_VERTEX, 2, GL_FLOAT, 0, 0, squareVertices);
glEnableVertexAttribArray(ATTRIB_VERTEX);
glVertexAttribPointer(ATTRIB_TEXTUREPOSITON, 2, GL_FLOAT, 0, 0, textureVertices);
glEnableVertexAttribArray(ATTRIB_TEXTUREPOSITON);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, imageHeight, imageWidth, 0, GL_RGBA, GL_UNSIGNED_BYTE, image);
}
Here is the part that renders the texture:
-(void)render
{
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glViewport(0, 0, imageHeight, imageWidth);
[self presentFramebuffer];
}
And here is the part that reads the result back:
-(void)readPixels : (unsigned char*) dest
{
glReadPixels(0, 0, imageHeight, imageWidth, GL_RGBA, GL_UNSIGNED_BYTE, dest);
glDeleteTextures(1, &pictureTexture);
}
I have no idea where I'm making the error. I've searched this forum and the Khronos Group forum, but I haven't found a solution (or even a case with the same error description).
Possibly another important, or confusing, detail: I've also tried putting the code into a C++ class. When I go outside the Objective-C class that owns the EAGLContext, I get back the correct picture size, but the result is wrong: the image contains only snow, although without the black bar on the side.
Does anyone know a solution for this error?
Regards,
krikit
I have solved the problem. The error was in the initialization of the OpenGL ES view, in the part where I create the rectangle for initWithFrame:.
CGRectMake(0.0f, 0.0f, applicationFrame.size.width, applicationFrame.size.height)
That rectangle had the wrong size; when I set the size myself, I get a complete picture in the destination buffer.
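For illustration, a minimal sketch of that with an explicit size (the EAGLView class name and the imageWidth/imageHeight variables are placeholders for whatever the project actually uses):
// Size the GL view to match the image being processed instead of using applicationFrame.
CGRect frame = CGRectMake(0.0f, 0.0f, imageWidth, imageHeight);
EAGLView *glView = [[EAGLView alloc] initWithFrame:frame];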
The other part, the snow in my rendered picture, came from the wrong data type. When I cast a variable to GLuint it is not the same as initializing a new GLuint variable and assigning it the value... but I don't know why.
unsigned int j = 20;
GLuint i = 20;
glReadPixels(0, 0, j, (GLuint)j, GL_RGBA, GL_UNSIGNED_BYTE, pictureDest);
When I work with the GLuint variables instead of the casted values, everything works fine.
krikit
ES 2.0 newbie here. I'm currently trying to make some 3D interlaced images from stereo images with ES 2.0 and the PowerVR SDK. I can output one image texture fine, but when I try to output the second one, I seem to be overwriting the first. So, my question is, given the fragment shader below, can I use it to draw two textures, or can the sampler2d uniform only be bound to one texture unit?
Here's the fragment shader (taken from the PowerVR "training course" sample programs):
uniform sampler2D sampler2d;
varying mediump vec2 myTexCoord;
void main(void)
{
gl_FragColor = texture2D(sampler2d, myTexCoord);
}
And here's how I am loading the image textures into the shader:
//LEFT IMAGE
glActiveTexture(GL_TEXTURE0);
glGenTextures(1, &m_uiTexture_left);
glBindTexture(GL_TEXTURE_2D, m_uiTexture_left);
glUniform1i(glGetUniformLocation(m_uiProgramObject, "sampler2d"), 0);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, pwr2Width, pwr2Height,0, GL_RGB, GL_UNSIGNED_BYTE, xImageL);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
GLfloat afVertices[30] = {0.0};
genVertices(afVertices);
glGenBuffers(1, &m_ui32Vbo_leftimage);
m_ui32VertexStride = 5 * sizeof(GLfloat); // 3 floats for the pos, 2 for the UVs
glBindBuffer(GL_ARRAY_BUFFER, m_ui32Vbo_leftimage);
glBufferData(GL_ARRAY_BUFFER, 6 * m_ui32VertexStride, afVertices, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);
//RIGHT IMAGE
glActiveTexture(GL_TEXTURE1);
glGenTextures(1, &m_uiTexture_right);
glBindTexture(GL_TEXTURE_2D, m_uiTexture_right);
glUniform1i(glGetUniformLocation(m_uiProgramObject, "sampler2d"), 1);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, pwr2Width, pwr2Height,0, GL_RGB, GL_UNSIGNED_BYTE, xImageR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glGenBuffers(1, &m_ui32Vbo_rightimage);
m_ui32VertexStride = 5 * sizeof(GLfloat); // 3 floats for the pos, 2 for the UVs
glBindBuffer(GL_ARRAY_BUFFER, m_ui32Vbo_rightimage);
glBufferData(GL_ARRAY_BUFFER, 6 * m_ui32VertexStride, afVertices, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);
So, is this code just overwriting the sampler2d uniform? Do I need another uniform or shader for the second image?
OK, so as I mentioned in the comment above, I eventually found the answer here: http://www.opentk.com/node/2559. As mentioned by the user "Profet" on that forum, one way to reuse the fragment shader is to move the glUniform1i() calls down to the render function: call glUniform1i() to point the sampler at the first texture unit, draw, then call glUniform1i() again to point it at the second texture unit (and draw). So now my initialise() code is as follows:
//LEFT IMAGE
glActiveTexture(GL_TEXTURE0);
glGenTextures(1, &m_uiTexture_left);
glBindTexture(GL_TEXTURE_2D, m_uiTexture_left);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, pwr2Width, pwr2Height,0, GL_RGB, GL_UNSIGNED_BYTE, xImageL);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
GLfloat afVertices[30] = {0.0};
genVertices(afVertices);
glGenBuffers(1, &m_ui32Vbo_leftimage);
m_ui32VertexStride = 5 * sizeof(GLfloat); // 3 floats for the pos, 2 for the UVs
glBindBuffer(GL_ARRAY_BUFFER, m_ui32Vbo_leftimage);
glBufferData(GL_ARRAY_BUFFER, 6 * m_ui32VertexStride, afVertices, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);
//RIGHT IMAGE
glActiveTexture(GL_TEXTURE1);
glGenTextures(1, &m_uiTexture_right);
glBindTexture(GL_TEXTURE_2D, m_uiTexture_right);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, pwr2Width, pwr2Height,0, GL_RGB, GL_UNSIGNED_BYTE, xImageR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glGenBuffers(1, &m_ui32Vbo_rightimage);
m_ui32VertexStride = 5 * sizeof(GLfloat); // 3 floats for the pos, 2 for the UVs
glBindBuffer(GL_ARRAY_BUFFER, m_ui32Vbo_rightimage);
glBufferData(GL_ARRAY_BUFFER, 6 * m_ui32VertexStride, afVertices, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);
And my render() code goes like:
//&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&& LEFT IMAGE &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
glEnable(GL_STENCIL_TEST);
glStencilFunc(GL_NOTEQUAL, 1, 1); //draw on uneven lines only
glUniform1i(glGetUniformLocation(m_uiProgramObject, "sampler2d"), 0); //pass texture unit 0 to shader
glBindBuffer(GL_ARRAY_BUFFER, m_ui32Vbo_leftimage); // unbound this near the end of initView, so need to re-bind it
glEnableVertexAttribArray(VERTEX_ARRAY);
glVertexAttribPointer(VERTEX_ARRAY, 3, GL_FLOAT, GL_FALSE, m_ui32VertexStride, 0);
glEnableVertexAttribArray(TEXCOORD_ARRAY);
glVertexAttribPointer(TEXCOORD_ARRAY, 2, GL_FLOAT, GL_FALSE, m_ui32VertexStride, (void*) (3 * sizeof(GLfloat)));
glDrawArrays(GL_TRIANGLES, 0, 6);
glBindBuffer(GL_ARRAY_BUFFER, 0);
//&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&& RIGHT IMAGE &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
glEnable(GL_STENCIL_TEST);
glStencilFunc(GL_EQUAL, 1, 1); //draw on even lines only
glUniform1i(glGetUniformLocation(m_uiProgramObject, "sampler2d"), 1);
glBindBuffer(GL_ARRAY_BUFFER, m_ui32Vbo_rightimage);
glEnableVertexAttribArray(VERTEX_ARRAY);
glVertexAttribPointer(VERTEX_ARRAY, 3, GL_FLOAT, GL_FALSE, m_ui32VertexStride, 0);
glEnableVertexAttribArray(TEXCOORD_ARRAY);
glVertexAttribPointer(TEXCOORD_ARRAY, 2, GL_FLOAT, GL_FALSE, m_ui32VertexStride, (void*) (3 * sizeof(GLfloat)));
glDrawArrays(GL_TRIANGLES, 0, 6);
glBindBuffer(GL_ARRAY_BUFFER, 0);
Hope this helps someone someday! :)
The right way is to control which texture is bound to which texture unit, rather than repointing the sampler uniform:
http://www.opengl.org/wiki/Sampler_(GLSL)#Binding_textures_to_samplers
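In other words, set the sampler uniform once and then switch which texture is bound to that unit between draws. A sketch, assuming both textures can share texture unit 0 and the program is already in use (m_uiTexture_left/right are the textures from the code above):
// At init (program bound with glUseProgram): the sampler always reads from unit 0.
glUniform1i(glGetUniformLocation(m_uiProgramObject, "sampler2d"), 0);
// At draw time: rebind the texture on unit 0 between the two passes.
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, m_uiTexture_left);
glDrawArrays(GL_TRIANGLES, 0, 6); // left-image pass
glBindTexture(GL_TEXTURE_2D, m_uiTexture_right);
glDrawArrays(GL_TRIANGLES, 0, 6); // right-image pass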
I am creating a framebuffer object for a render-to-texture setup. It works perfectly fine in the iPhone simulator, but on the device glCheckFramebufferStatus(GL_FRAMEBUFFER) returns GL_FRAMEBUFFER_UNSUPPORTED at the end of the FBO creation.
I am testing it on iPhone 3GS with iOS 5.
Here is the code:
GLenum errNo;
GLsizei width = 320;
GLsizei height = 480;
GLuint textureHandle;
GLuint fboHandle;
glGetError();
glGenTextures(1, &textureHandle);
glBindTexture(GL_TEXTURE_2D, textureHandle);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA,GL_HALF_FLOAT_OES, NULL);
glGenFramebuffers(1, &fboHandle);
glBindFramebuffer(GL_FRAMEBUFFER,fboHandle);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, textureHandle, 0);
errNo = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if(GL_FRAMEBUFFER_COMPLETE != errNo){
printf("Unable to create FBO. errNo: %x\n",errNo);
}
I am clueless. How should I debug this problem?
Found the solution: a texture created with GL_HALF_FLOAT_OES is not supported as a color attachment on this device. I used GL_UNSIGNED_BYTE instead and it works now.
Does not work:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA,GL_HALF_FLOAT_OES, NULL);
Works:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA,GL_UNSIGNED_BYTE, NULL);
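If you do want a half-float render target, a minimal sketch of checking for the relevant extensions first (this check is my addition, not part of the original answer; it needs <string.h> for strstr):
const char *extensions = (const char *)glGetString(GL_EXTENSIONS);
// Sampling half-float textures requires GL_OES_texture_half_float;
// rendering into them additionally requires GL_EXT_color_buffer_half_float.
int halfFloatOK = extensions != NULL
&& strstr(extensions, "GL_OES_texture_half_float") != NULL
&& strstr(extensions, "GL_EXT_color_buffer_half_float") != NULL;
GLenum texelType = halfFloatOK ? GL_HALF_FLOAT_OES : GL_UNSIGNED_BYTE;
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, texelType, NULL);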