glClear not working on execution - opengl-es-2.0

I am trying to create an interactive OpenGL program which takes user input and displays a shape. Everything seems to work once the user inputs something, but on first execution the glClear in the display function isn't actually clearing the screen. Am I missing something somewhere?
Thanks!!
#include <stdlib.h>
#include <stdio.h>
#include <GL/gl.h>
#include <GL/glu.h>
#include <GL/glut.h>

void exit(int);

void myInit(void){
    glClearColor(0.0, 0.0, 0.0, 1.0);
    glColor3f(1.0f, 0.0f, 0.0f);
    glLineWidth(4.0);
}

//SET WINDOW
void setWindow(double left, double right, double bottom, double top){
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(left, right, bottom, top, -1.0, 1.0); //viewing volume at origin, edge length 2
}

//SET VIEWPORT
void setViewport(int x, int y, int width, int height){
    glViewport(x, y, width, height);
}

//DISPLAY FUNCTION
void myDisplay(void){
    glClear(GL_COLOR_BUFFER_BIT);
}

void myKeyboard(unsigned char key, int mouseX, int mouseY){
    switch(key)
    {
        case 'Q':
        case 'q':
            exit(-1);
        case 't':
            //displayTriangle();
            glClear(GL_COLOR_BUFFER_BIT);
            glBegin(GL_POLYGON);
            glVertex2f(-0.5, -0.5);
            glVertex2f(0.0, 0.5);
            glVertex2f(0.5, -0.5);
            glEnd();
            glFlush();
            glutPostRedisplay();
            break;
        default:
            break;
    }
}

int main(int argc, char** argv){
    glutInit(&argc, argv); //initialize toolkit
    glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
    glutInitWindowSize(640, 480);
    glutInitWindowPosition(100, 150);
    glutCreateWindow("Interactive Shape Display!");
    myInit();
    glutDisplayFunc(myDisplay);
    glutKeyboardFunc(myKeyboard);
    glutMainLoop();
    return 0;
}

glClear(GL_COLOR_BUFFER_BIT)
will clear the screen to the glClearColor() value. I'll run your code when I get a chance, but glColor3f(1.0f, 0.0f, 0.0f) sets the current color to red for your vertices (glVertex2f(-0.5, -0.5), ...) until you change it with another glColor3f() call. Since you set glClearColor() to black, you should be able to clear the screen any time you call glClear(). You call it in myDisplay(). When you enter myKeyboard(), glClear() is called again, but the red glVertex2f() calls follow immediately, which draws over the cleared screen.
When you say that the screen is not cleared on first execution, do you mean you are not seeing black initially? I'll run the code and edit this if my results differ.
Edit:
It's a good idea to move the drawing code into the display function. If you still want keyboard interaction, you can register a glutTimerFunc() in your main function:
glutTimerFunc(1, timerLoopCallback, 0);
The timer's callback performs any logic based on what the keyboard callback recorded. So myKeyboard() registers the 't' key (for example by setting a variable), and the timer callback decides that a triangle must be drawn.
glutPostRedisplay();
should come next in the timer callback, followed by another call to glutTimerFunc() (exactly as above) to re-arm the timer. In the display function, the triangle is drawn if the timer callback decided it should be.
This describes the relation between drawing and its separation from other logic. You might also want glutSwapBuffers(); at the end of your draw code if you use double buffering.
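As a concrete illustration, here is a minimal sketch of that structure. The showTriangle flag and its name are illustrative, not from the question's code; the rest follows the question's callbacks:

```c
#include <GL/glut.h>

/* Illustrative flag: set by the keyboard callback, read by the
 * display callback. All actual drawing stays in myDisplay(). */
static int showTriangle = 0;

void myDisplay(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    if (showTriangle) {
        glBegin(GL_POLYGON);
        glVertex2f(-0.5f, -0.5f);
        glVertex2f(0.0f, 0.5f);
        glVertex2f(0.5f, -0.5f);
        glEnd();
    }
    glFlush(); /* with GLUT_DOUBLE, call glutSwapBuffers() instead */
}

void myKeyboard(unsigned char key, int mouseX, int mouseY)
{
    if (key == 't') {
        showTriangle = 1;    /* record the request...              */
        glutPostRedisplay(); /* ...and let GLUT call myDisplay()   */
    }
}
```

With this structure every frame, including the very first one, goes through the same display path, so the initial clear behaves the same as every later one.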

Related

How can I optimize drawing Core Graphics grid for low memory?

In my app, I draw a grid in a custom view controller. This gets embedded in a scroll view, so the grid gets redrawn from time to time after the user zooms in or out (not during zooming).
The problem is, I'm trying to optimize this method for low-memory devices like the iPad Mini, and it's still crashing. If I take this drawing away entirely, the app works fine. It doesn't give me a reason for the crash; it just tells me the connection was lost. I can see in Instruments that memory spikes just before the crash, so I'm pretty certain this is the issue.
The view is 2.5 times the screen size horizontally and vertically, set programmatically when it's created. It has multiple CALayers and this grid drawing happens on several of them. The crash happens either immediately when I segue to this view, or soon after when I do some zooming.
Here are my drawing methods (somewhat simplified for readability and because they're pretty redundant):
#define MARKER_FADE_NONE 0
#define MARKER_FADE_OUT  1 // fades out at widest zoom
#define MARKER_FADE_IN   2 // fades in at widest zoom

- (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx
{
    if (layer == gridSubdividers)   [self drawGridInContext:ctx spacing:1  lineWidth:1 markers:NO  fade:MARKER_FADE_NONE];
    if (layer == gridDividers)      [self drawGridInContext:ctx spacing:5  lineWidth:1 markers:YES fade:MARKER_FADE_OUT];
    if (layer == gridSuperdividers) [self drawGridInContext:ctx spacing:10 lineWidth:2 markers:YES fade:MARKER_FADE_IN];
}

// DRAW GRID LINES
- (void)drawGridInContext:(CGContextRef)context
                  spacing:(float)spacing
                lineWidth:(NSInteger)lineWidth
                  markers:(BOOL)markers
                     fade:(int)fade
{
    spacing = _gridUnits * spacing;
    CGContextSetStrokeColorWithColor(context, [UIColor gridLinesColor]);
    CGContextSetLineWidth(context, lineWidth);
    CGContextSetShouldAntialias(context, NO);

    float top = 0;
    float bottom = _gridSize.height;
    float left = 0;
    float right = _gridSize.width;

    // vertical lines (right of origin)
    CGMutablePathRef path = CGPathCreateMutable();
    for (float i = _origin.x + spacing; i <= _gridSize.width; i = i + spacing) {
        CGPathMoveToPoint(path, NULL, i, top);
        CGPathAddLineToPoint(path, NULL, i, bottom);
    }
    CGContextAddPath(context, path);
    CGContextStrokePath(context);
    CGPathRelease(path);
}
...then I repeat this for() loop three more times to draw the other lines of the grid. I also tried this slightly different version of the loop, where I don't create an individual path, but instead just add lines to the context and then stroke only at the very end of all four of these loops:
for (float i = _origin.x + spacing; i <= _gridSize.width; i = i + spacing) {
    CGContextMoveToPoint(context, i, top);
    CGContextAddLineToPoint(context, i, bottom);
}
CGContextStrokePath(context);
I also tried a combination of the two, in which I began and stroked within each cycle of the loop:
for (float i = _origin.x + spacing; i <= _gridSize.width; i = i + spacing) {
    CGContextBeginPath(context);
    CGContextMoveToPoint(context, i, top);
    CGContextAddLineToPoint(context, i, bottom);
    CGContextStrokePath(context);
}
So how can I streamline this whole drawing method so that it doesn't use as much memory? If it can't draw one path at a time, releasing each one afterward (like the first loop I showed above), I'm not really sure what to do other than read the available memory on the device and turn off the grid drawing entirely if it's low.
I'm also totally open to alternative methods of grid drawing.

How to use glClearBuffer* in Cocoa OpenGLView to clear color buffer

I'm trying to understand how to use glClearBuffer* to change the background color in a (either single or double buffered) NSOpenGLView in Cocoa for OS X.
The following code fragment as suggested by the OpenGL Superbible fails with GL_INVALID_OPERATION:
GLfloat red[] = {1.0f, 0.0f, 0.0f, 1.0f};
glClearBufferfv(GL_COLOR, 0, red);
What do I need to supply for the second parameter?
I'm using a double buffered View extending OpenGLView.
#import "MyOpenGLView.h"
#include <OpenGL/gl3.h>

@implementation MyOpenGLView

- (void)drawRect:(NSRect)bounds
{
    GLfloat red[] = {1.0f, 0.0f, 0.0f, 1.0f};
    glClearBufferfv(GL_COLOR, 0, red);
    GLenum e = glGetError();
    // e == GL_INVALID_OPERATION after this call
    // and the view is drawn in black

    // The following two lines work as intended:
    //glClearColor(1.0, 0.f, 0.f, 1.f);
    //glClear(GL_COLOR_BUFFER_BIT);

    [[self openGLContext] flushBuffer];
}

@end
Really? It is giving you GL_INVALID_OPERATION?
This function is not supposed to generate that error... are you sure something earlier in your program did not generate it and you are mistaking the source?
The bigger problem, however, is that using GL_COLOR as the buffer in this API call means the second parameter is interpreted as an index into your set of draw buffers. It is unclear how your draw buffers are set up in this code; it is possible that you have GL_NONE. As there is no defined error behavior for trying to clear a draw buffer that is GL_NONE, I suppose an implementation might choose to raise GL_INVALID_OPERATION.
In order for your current usage of glClearBufferfv (...) to make sense, I would expect to see something like this:
GLenum buffers[] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
GLfloat red[]    = { 1.0f, 0.0f, 0.0f, 1.0f };

glDrawBuffers(2, buffers);
glClearBufferfv(GL_COLOR, 0, red);
Now this call will clear GL_COLOR_ATTACHMENT0, if you wanted to clear GL_COLOR_ATTACHMENT1, you could replace 0 with 1.

Drawing with OpenGL

I'm working on a program which will calculate the centroids of some polygons. I have the centroid calculations in place. I would like to display the polygons with OpenGL. I have an OpenGL window up and running already.
In the OpenGL class there is a method, drawRect, where you 'draw' the vertices to the screen. However, the vertices I want to draw live in a separate polygon class. Ideally I would like to call a draw method on a polygon, e.g.
firstPolygon.draw();
But I don't know how to do that, as the drawRect method is in the OpenGL class and that's the only way I know to draw. Can I somehow send data to the draw method from within the Polygon class, or draw directly to the screen within the polygon class?
Currently 'OpenGLView.m' contains:
#import "OpenGL/gl.h"
#import "OpenGLView.h"
#import "Poly.h"

@implementation OpenGLView

- (id)initWithFrame:(NSRect)frameRect
{
    self = [super initWithFrame:frameRect];
    if (self) {
        // initialise things here
    }
    return self;
}

- (void)drawRect:(NSRect)rect
{
    glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);

    glColor3f(0.0f, 0.0f, 0.0f);
    glBegin(GL_LINE_LOOP);
    {
        glVertex3f( 0.0,  0.6, 0.0);
        glVertex3f(-0.2, -0.3, 0.0);
        glVertex3f( 0.2, -0.3, 0.0);
    }
    glEnd();

    // finish drawing
    glFlush();
}

@end
And here is the 'Polygon' class method I would like to draw in, so I can refer to the stored vertices easily:
- (void)drawPolygon
{
    // draw vertices
}
If you call your draw() function from within the screen-draw context in OpenGL (i.e. from drawRect:), you should be okay. That means your draw() will still have to make the appropriate OpenGL draw calls, of course.
I'm unfamiliar with OpenGL on Objective-C, but you can probably just pass the drawing context to the draw() function and have it participate in drawRect: that way.
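For example, a sketch of what drawPolygon could look like, assuming Poly stores its points in a C array (the vertices and vertexCount names are illustrative, not from the question):

```objc
// Poly.m (sketch): the polygon issues its own GL calls.
// With legacy immediate-mode OpenGL no context argument is needed,
// as long as this runs while the view's GL context is current,
// e.g. when called from OpenGLView's drawRect:.
- (void)drawPolygon
{
    glBegin(GL_LINE_LOOP);
    for (int i = 0; i < vertexCount; i++)
        glVertex3f(vertices[i].x, vertices[i].y, 0.0f);
    glEnd();
}
```

Then in OpenGLView's drawRect:, you would call [firstPolygon drawPolygon]; between the glClear() and the glFlush().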

Graph not following orientation

So I am trying to draw some grid lines that, in landscape, go all the way down to the bottom. However, when I switch to landscape the graph doesn't follow, and the grid lines end up too short.
I have set:
- (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation
{
    // Return YES for supported orientations
    return YES;
}
However, it still doesn't work. Here is my code; can anyone spot the problem? This is the custom view file; the view controller is the default, apart from the code above that returns YES.
.h file
#import <UIKit/UIKit.h>

#define kGraphHeight 300
#define kDefaultGraphWidth 900
#define kOffsetX 10
#define kStepX 50
#define kGraphBottom 300
#define kGraphTop 0

@interface GraphView : UIView
@end
And here is the implementation
- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetLineWidth(context, 0.6);
    CGContextSetStrokeColorWithColor(context, [[UIColor lightGrayColor] CGColor]);

    // How many lines?
    int howMany = (kDefaultGraphWidth - kOffsetX) / kStepX;

    // Here the lines go
    for (int i = 0; i < howMany; i++)
    {
        CGContextMoveToPoint(context, kOffsetX + i * kStepX, kGraphTop);
        CGContextAddLineToPoint(context, kOffsetX + i * kStepX, kGraphBottom);
    }
    CGContextStrokePath(context);
}
Any help would be appreciated. BTW, I am following this tutorial:
http://buildmobile.com/creating-a-graph-with-quartz-2d/#fbid=YDPLqDHZ_9X
You need to do three things:
1. In Interface Builder, select the view containing the graph and drag it so it fills the screen.
2. In Interface Builder, set the Autosizing masks on the view so it continues to fill the screen after rotation. You can change the simulated metrics of the outer view to make sure this happens.
3. The kGraphTop and kGraphBottom constants mean it only draws from 0 to 300 pixels. You could just make kGraphBottom larger, but that would not be reliable. Instead, you want to find the size of the view bounds and fill them from top to bottom.
- (void)drawRect:(CGRect)rect
{
    // Get the current drawing context.
    CGContextRef context = UIGraphicsGetCurrentContext();
    // Get the size of the view being drawn to.
    CGRect bounds = [self bounds];

    CGContextSetLineWidth(context, 0.6);
    CGContextSetStrokeColorWithColor(context, [[UIColor lightGrayColor] CGColor]);

    // How many lines?
    int howMany = (bounds.size.width - kOffsetX) / kStepX;

    // Here the lines go
    for (int i = 0; i < howMany; i++)
    {
        // Start at the very top of the bounds.
        CGContextMoveToPoint(context, bounds.origin.x + kOffsetX + i * kStepX, bounds.origin.y);
        // Draw to the bottom of the bounds.
        CGContextAddLineToPoint(context, bounds.origin.x + kOffsetX + i * kStepX, bounds.origin.y + bounds.size.height);
    }
    CGContextStrokePath(context);
}

JOGL four blank initial frames

In JOGL there is addGLEventListener(), and I added a listener with it.
When the display() callback is called, the screen is painted black, but only after four frames does display() actually draw something.
How can I make display() draw something on the first display() callback?
If your application implements the GLEventListener interface, the callbacks always run in this sequence:
init();
reshape();
display().
In my opinion, you have the drawing sequence wrong inside display().
Try doing it this way:
public void display(GLAutoDrawable drawable) {
    gl = drawable.getGL();

    // Set the clear color before clearing, so it takes effect this frame.
    gl.glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    gl.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT);

    gl.glColor3f(1.0f, 1.0f, 1.0f);
    gl.glBegin(GL.GL_POLYGON);
    gl.glVertex2f(-0.5f, -0.5f);
    gl.glVertex2f(-0.5f, 0.5f);
    gl.glVertex2f(0.5f, 0.5f);
    gl.glVertex2f(0.5f, -0.5f);
    gl.glEnd();

    drawable.swapBuffers(); // for double buffering
}