glDrawArrays not drawing correctly - objective-c

I made a painting program. Everything works as I expected, but while drawing, some strange things sometimes happen.
I run the app and press the left mouse button on the image. It should draw a point using this code:
glEnableClientState(GL_VERTEX_ARRAY);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, brushTextura);
glPointSize(100);
glVertexPointer(2, GL_FLOAT, 0, GLVertices);
glDrawArrays(GL_POINTS, 0, count);
glDisableClientState(GL_VERTEX_ARRAY);
at the point where I press. mouseDown registers the mouse-down location, converts it to an NSValue, and adds it to an array; then, before drawing, I extract the NSValue back to a CGPoint and copy it into a GLfloat array so it can be drawn by glDrawArrays. But no matter where I click on the image, it draws the point at coordinates (0,0). After that, everything works OK. See image:
That was the first problem. The second problem is that when I paint with it (drag with the mouse pressed), points sometimes appear where they were not drawn. Image:
When I continue dragging, the point disappears. After some more dragging it appears again and disappears again, and so on. Image:
Any ideas why this could happen? I will post the code below:
Mouse down:
- (void)mouseDown:(NSEvent *)event
{
    location = [self convertPoint: [event locationInWindow] fromView:self];
    NSValue *locationValue = [NSValue valueWithPoint:location];
    [vertices addObject:locationValue];
    [self drawing];
}
Mouse dragged:
- (void)mouseDragged:(NSEvent *)event
{
    location = [self convertPoint: [event locationInWindow] fromView:self];
    NSValue *locationValue = [NSValue valueWithPoint:location];
    [vertices addObject:locationValue];
    [self drawing];
}
Drawing:
- (void)drawing {
    int count = [vertices count] * 2;
    NSLog(@"count: %d", count);
    int currIndex = 0;
    GLfloat *GLVertices = (GLfloat *)malloc(count * sizeof(GLfloat));
    for (NSValue *locationValue in vertices) {
        CGPoint loc = locationValue.pointValue;
        GLVertices[currIndex++] = loc.x;
        GLVertices[currIndex++] = loc.y;
    }
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, brushTextura);
    glPointSize(100);
    glVertexPointer(2, GL_FLOAT, 0, GLVertices);
    glDrawArrays(GL_POINTS, 0, count);
    glDisableClientState(GL_VERTEX_ARRAY);
}

You are setting your count variable (the one used in glDrawArrays) to [vertices count] * 2, which seems strange.
The last argument to glDrawArrays is the number of vertices to draw, whereas in your code you are setting it to double that number (maybe you thought it was the number of floats?), which means you are drawing garbage after the first [vertices count] vertices.
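A quick sketch of the fix, keeping the GLVertices buffer from the question but counting vertices rather than floats:

GLsizei vertexCount = (GLsizei)[vertices count];   // one (x, y) pair per vertex
glVertexPointer(2, GL_FLOAT, 0, GLVertices);       // 2 floats per vertex
glDrawArrays(GL_POINTS, 0, vertexCount);           // count is in vertices, not floats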

The fact that the vertices aren't rendered at the exact location you clicked should be a hint that the problem is you haven't properly determined the hit point within the view.
Your code has:
location = [self convertPoint: [event locationInWindow] fromView: self];
which tells the view to convert the point from its own coordinates (self) to the same view's coordinates (self), even though the point is actually relative to the window.
To convert the point from the window's coordinates to the view's, change that line to the following:
location = [self convertPoint: [event locationInWindow] fromView: nil];

The arguments to glDrawArrays are defined as (GLenum mode, GLint first, GLsizei count).
The second argument defines the first index of the vertex attributes used when drawing. Passing 1 as the first index would make your vertex coordinates no longer match up; I assume you want 0 there.
http://www.opengl.org/sdk/docs/man/xhtml/glDrawArrays.xml

Related

Moved annotation in PDFView but it moves to the opposite direction

I have written code to move an annotation while dragging the mouse, but the annotation moves in the opposite direction of my mouse trace. It moves as if mirrored through the first click point, to the centrosymmetrically opposite position... I draw a rect for the annotation's bounds, and the rect's position is correct...
Here are two images showing what happened:
Before I drag:
After I moved:
The annotation goes away....
Below is my code:
- (void)mouseDragged:(NSEvent *)theEvent
{
    // Move annotation.
    // Hit test: is the mouse still within the page bounds?
    if (NSPointInRect([self convertPoint: mouseLoc toPage: activePage],
                      [[_activeAnnotation page] boundsForBox: [self displayBox]]))
    {
        // Calculate new bounds for the annotation.
        newBounds = currentBounds;
        newBounds.origin.x = roundf(endPt.x - _clickDelta.x);
        newBounds.origin.y = roundf(endPt.y - _clickDelta.y);
    }
    else
    {
        // Snap back to the initial location.
        newBounds = _wasBounds;
    }
    // Change the annotation's location.
    [_activeAnnotation setBounds: newBounds];
    // Call our method to handle updating annotation geometry.
    [self annotationChanged];
    // Force redraw.
    dirtyRect = NSUnionRect(currentBounds, newBounds);
    dirtyRect.origin.x -= 10;
    dirtyRect.origin.y -= 10;
    dirtyRect.size.width += 20;
    dirtyRect.size.height += 20;
    [super setNeedsDisplay:YES];
    [self setNeedsDisplayInRect:RectPlusScale([self convertRect: dirtyRect fromPage: [_activeAnnotation page]], [self scaleFactor])];
    [super mouseDragged: theEvent];
}
The weird thing is, if the annotation is an image or text, the move is correct. It only happens with the PDFAnnotationInk class....

Receive global TouchEvents with NSEvent

I am trying to receive touch events globally in the window, not only in a view.
In my code, which you can see below, I get the absolute position of the touch on the Magic Trackpad. As long as the cursor is inside the view (the red NSRect) it works fine, but how can I receive touches outside this view?
I searched for solutions in many communities and the Apple dev center but found nothing.
I think the problem is this line: NSSet *touches = [ev touchesMatchingPhase:NSTouchPhaseTouching inView:nil]; Isn't there a method in NSEvent that gets every touch?
I hope somebody can help me.
Here is my implementation:
@implementation MyView

- (id)initWithFrame:(NSRect)frame {
    self = [super initWithFrame:frame];
    if (self) {
        // Initialization code here.
        [self setAcceptsTouchEvents:YES];
        myColor = [NSColor colorWithDeviceRed:1.0 green:0 blue:0 alpha:0.5];
    }
    return self;
}

- (void)drawRect:(NSRect)dirtyRect {
    // Drawing code here.
    NSRect bounds = [self bounds];
    [myColor set];
    [NSBezierPath fillRect:bounds];
}

- (void)touchesBeganWithEvent:(NSEvent *)ev {
    NSSet *touches = [ev touchesMatchingPhase:NSTouchPhaseTouching inView:nil];
    for (NSTouch *touch in touches) {
        NSPoint fraction = touch.normalizedPosition;
        NSSize whole = touch.deviceSize;
        NSPoint wholeInches = {whole.width / 72.0, whole.height / 72.0};
        NSPoint pos = wholeInches;
        pos.x *= fraction.x;
        pos.y *= fraction.y;
        NSLog(@"%s: Finger is touching %g inches right and %g inches up "
              @"from lower left corner of trackpad.", __func__, pos.x, pos.y);
    }
}
Calling touchesMatchingPhase:inView: with nil for the last parameter (the way you are doing) will get all touches. The problem is that touchesBeganWithEvent: simply won't fire for a view that isn't under the touch area.
You can make the view the first responder, which will send all events to it first. See becomeFirstResponder and the responder-object documentation.
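A minimal sketch of that approach, assuming MyView is installed in a window you hold a reference to (the myWindow and myView names are hypothetical):

// In MyView: agree to become first responder.
- (BOOL)acceptsFirstResponder {
    return YES;
}

// After the view is in the window, e.g. in your window controller:
[myWindow makeFirstResponder:myView];   // events are offered to this view first

Whether this actually delivers touches that begin outside the view's frame depends on how AppKit routes the touch events, so treat it as a starting point rather than a guaranteed fix.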

Updating NSRect size with negative value

A question regarding NSRect... In the Hillegass book we create an NSRect into which we draw an oval (an NSBezierPath *). Depending on where in our view we mouse down and subsequently drag, the NSRect's size.width and/or size.height may be negative (e.g. if we start in the upper right and drag to the lower left, both are negative). When actually drawing, does the system use our negative width and/or height merely to locate the NSPoint to which we dragged, thus updating the NSRect? And if we ever need the size of the NSRect, should we just use the absolute values?
In the chapter, the authors used the MIN() and MAX() macros to create an NSRect. However, in the challenge solution they provide these three methods in response to mouse events:
- (void)mouseDown:(NSEvent *)theEvent
{
    NSPoint pointInView = [self convertPoint:[theEvent locationInWindow] fromView:nil];
    // Why do we offset by 0.5? Because lines drawn exactly on the .0 will end up spread over two pixels.
    workingOval = NSMakeRect(pointInView.x + 0.5, pointInView.y + 0.5, 0, 0);
    [self setNeedsDisplay:YES];
}

- (void)mouseDragged:(NSEvent *)theEvent
{
    NSPoint pointInView = [self convertPoint:[theEvent locationInWindow] fromView:nil];
    workingOval.size.width = pointInView.x - (workingOval.origin.x - 0.5);
    workingOval.size.height = pointInView.y - (workingOval.origin.y - 0.5);
    [self setNeedsDisplay:YES];
}

- (void)mouseUp:(NSEvent *)theEvent
{
    [[self document] addOvalWithRect:workingOval];
    workingOval = NSZeroRect; // zero rect indicates we are not presently drawing
    [self setNeedsDisplay:YES];
}
This code produces a successful rectangle regardless of the potential negative values. I understand that the negative values merely reflect the shift left with respect to the origin ( the point from which we "mouse downed"). What is going on behind the scenes in properly calculating the NSPoint to which we dragged?
An NSRect is just defined as an NSPoint and an NSSize. An NSSize is just defined as a pair of CGFloats, but the documentation says:
Normally, the values of width and height are non-negative. The functions that create an NSSize structure do not prevent you from setting a negative value for these attributes. If the value of width or height is negative, however, the behavior of some methods may be undefined.
In the code you show above, absolutely nothing fancy is going on behind the scenes. You're creating a rectangle (workingOval) that happens to have a negative width or height, and you're not actually using it anywhere.
Depending on what you do with workingOval elsewhere, what's going on behind the scenes will differ, but it'll be one of three very simple things. Some methods treat a rect like (origin=(30, 40), size=(-10, -20)) as identical to (origin=(20, 20), size=(10, 20)); others treat it as an invalid rect; some make assumptions they don't test and just give you garbage results. For example, NSMinX will return 30, not 20.
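If you need a rect you can safely pass to those methods, normalize it yourself with the same MIN()/MAX() idea the book uses. A minimal sketch (the helper name is ours, not from the book):

#import <Cocoa/Cocoa.h>

// Build a standardized rect (non-negative width/height) from two corner points.
static NSRect StandardRectFromPoints(NSPoint a, NSPoint b) {
    return NSMakeRect(MIN(a.x, b.x),      // lower-left x
                      MIN(a.y, b.y),      // lower-left y
                      fabs(b.x - a.x),    // non-negative width
                      fabs(b.y - a.y));   // non-negative height
}

For example, StandardRectFromPoints(NSMakePoint(30, 40), NSMakePoint(20, 20)) yields origin=(20, 20), size=(10, 20), matching the standardized rect above.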

How to detect which word touched with CoreText

I am using Core Text to lay out a custom view. The next step for me is to detect which word is tapped in a touch/gesture event. I have done research on this and found advice on how to make a custom label with a URL receive touches - but nothing generic. Does anyone have any idea how to do this?
UPDATE:
Here is the code within my drawRect: method:
self.attribString = [aString copy];
CGContextRef context = UIGraphicsGetCurrentContext();
// Flip the coordinate system
CGContextSetTextMatrix(context, CGAffineTransformIdentity);
CGContextTranslateCTM(context, 0, self.bounds.size.height);
CGContextScaleCTM(context, 1.0, -1.0);
CGMutablePathRef path = CGPathCreateMutable(); //1
CGPathAddRect(path, NULL, self.bounds);
CTFramesetterRef framesetter = CTFramesetterCreateWithAttributedString((__bridge CFAttributedStringRef)aString); //3
CTFrameRef frame = CTFramesetterCreateFrame(framesetter, CFRangeMake(0, [aString length]), path, NULL);
CTFrameDraw(frame, context); //4
UIGraphicsPushContext(context);
frameRef = frame;
CFRelease(path);
CFRelease(framesetter);
Here is where I am trying to handle the touch event
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    CGPoint point = [touch locationInView:self];
    CGContextRef context = UIGraphicsGetCurrentContext();
    CFArrayRef lines = CTFrameGetLines(frameRef);
    for (CFIndex i = 0; i < CFArrayGetCount(lines); i++)
    {
        CTLineRef line = CFArrayGetValueAtIndex(lines, i);
        CGRect lineBounds = CTLineGetImageBounds(line, context);
        NSLog(@"Line %ld (%f, %f, %f, %f)", i, lineBounds.origin.x, lineBounds.origin.y, lineBounds.size.width, lineBounds.size.height);
        NSLog(@"Point (%f, %f)", point.x, point.y);
        if (CGRectContainsPoint(lineBounds, point))
        {
It seems that CTLineGetImageBounds is returning a wrong origin (the size seems correct). Here is one example of the NSLog output: "Line 0 (0.562500, -0.281250, 279.837891, 17.753906)".
There is no "current context" in touchesEnded:withEvent:; you are not drawing at that point, so you can't call CTLineGetImageBounds() meaningfully.
I believe the best solution here is to use CTFrameGetLineOrigins() to find the correct line (by checking the Y origins), and then CTLineGetStringIndexForPosition() to find the correct character within the line (after subtracting the line origin from the point). This works best for simple stacked lines that span the entire view (such as yours).
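A minimal sketch of that suggestion, reusing the frameRef ivar from the question and assuming the same flipped Core Text coordinate system set up in drawRect::

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    CGPoint point = [[touches anyObject] locationInView:self];
    // Core Text laid the frame out in a flipped coordinate system.
    point.y = self.bounds.size.height - point.y;

    CFArrayRef lines = CTFrameGetLines(frameRef);
    CFIndex lineCount = CFArrayGetCount(lines);
    CGPoint origins[lineCount];
    CTFrameGetLineOrigins(frameRef, CFRangeMake(0, 0), origins);

    for (CFIndex i = 0; i < lineCount; i++) {
        // Lines run top to bottom, so the first line whose baseline
        // lies below the touch is the line that was hit.
        if (point.y > origins[i].y) {
            CTLineRef line = CFArrayGetValueAtIndex(lines, i);
            CGPoint posInLine = CGPointMake(point.x - origins[i].x,
                                            point.y - origins[i].y);
            CFIndex index = CTLineGetStringIndexForPosition(line, posInLine);
            NSLog(@"Touched character index: %ld", (long)index);
            break;
        }
    }
}

From the returned string index you can recover the word, for example by enumerating the backing string with NSStringEnumerationByWords and finding the range that contains the index.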
Other solutions:
Calculate all the line rectangles during drawRect: and cache them. Then you can just do rectangle checks in touchesEnded:.... That's a very good solution if drawing is less common than tapping; if drawing is significantly more common than tapping, it's a bad approach.
Do all your calculations using CTLineGetTypographicBounds(). That doesn't require a graphics context. You can use this to work out the rectangles.
In drawRect: generate a CGLayer with the current context and store it in an ivar. Then use the context from the CGLayer to calculate CTLineGetImageBounds(). The context from the CGLayer will be "compatible" with the graphics context you're using to draw (see the sketch below).
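A sketch of that CGLayer variant, with a hypothetical cachedLayer ivar (a CGLayerRef):

// In drawRect:, capture a layer whose context is compatible with
// the context used for drawing.
CGContextRef context = UIGraphicsGetCurrentContext();
if (cachedLayer == NULL) {
    cachedLayer = CGLayerCreateWithContext(context, self.bounds.size, NULL);
}

// Later, in touchesEnded:..., measure against the layer's context.
CGContextRef measureContext = CGLayerGetContext(cachedLayer);
CGRect lineBounds = CTLineGetImageBounds(line, measureContext);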
Side note: why are you calling UIGraphicsPushContext(context) in drawRect:? You're setting the current context to the current context, which doesn't make sense. And I don't see a corresponding UIGraphicsPopContext().

GL_LINES drawing erratically (iPad + OpenGL ES1)

I'm experiencing a really strange glitch in my iPad app. It's super simple: I just use the touchesMoved handler to draw a line between two points. Since I would like lines to stay on screen, I'm not calling glClear in my draw function, but for some reason some of the lines just drop out, seemingly at random. Stranger yet, it works perfectly in the simulator. Does anybody have any insight into why this might be? I've included my touch and draw routines.
Many thanks!
Pete
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    NSSet *allTouches = [event allTouches];
    switch ([allTouches count])
    {
        case 1:
        { // Single touch
        } break;
        case 2:
        { // Double touch
            UITouch *touch1 = [[allTouches allObjects] objectAtIndex:0];
            CGPoint location1 = [touch1 locationInView:[touch1 view]];
            UITouch *touch2 = [[allTouches allObjects] objectAtIndex:1];
            CGPoint location2 = [touch2 locationInView:[touch2 view]];
            [self drawLineWithStart:location1 end:location2];
        } break;
        default:
            break;
    }
}
- (void)drawLineWithStart:(CGPoint)start end:(CGPoint)end
{
    [EAGLContext setCurrentContext:context];
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);

    GLfloat lineVertices[] =
    {
        (start.x / (768.0 / 2.0)) - 1.0, -1.5 * ((start.y / (1024.0 / 2.0)) - 1.0),
        (end.x / (768.0 / 2.0)) - 1.0,   -1.5 * ((end.y / (1024.0 / 2.0)) - 1.0)
    };

    glDisable(GL_TEXTURE_2D);
    // Render the vertex array
    glVertexPointer(2, GL_FLOAT, 0, lineVertices);
    glDrawArrays(GL_LINE_STRIP, 0, 2);

    glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
    [context presentRenderbuffer:GL_RENDERBUFFER_OES];
}
This code calls presentRenderbuffer in a double-buffered context without a clear. That means it's drawing to one buffer, presenting it, drawing to the other buffer, presenting it, and so on. As a result no single visible buffer has all of the line drawing, and the swap (present) shows the differences between the alternating buffers. The simulator's double-buffering scheme is different from the physical device's, which explains the difference you're seeing.
Accumulate the lines in a data structure. Each frame, do the clear and then draw all of the lines.
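A minimal sketch of that pattern, keeping the context/framebuffer ivars from the question and adding a hypothetical lineVertices NSMutableData ivar to accumulate segments:

- (void)drawLineWithStart:(CGPoint)start end:(CGPoint)end
{
    // Accumulate the segment (converted to clip space exactly as before)
    // instead of drawing it straight away.
    GLfloat verts[4] = {
        (start.x / (768.0 / 2.0)) - 1.0, -1.5 * ((start.y / (1024.0 / 2.0)) - 1.0),
        (end.x / (768.0 / 2.0)) - 1.0,   -1.5 * ((end.y / (1024.0 / 2.0)) - 1.0)
    };
    [lineVertices appendBytes:verts length:sizeof(verts)];
    [self drawFrame];
}

- (void)drawFrame
{
    [EAGLContext setCurrentContext:context];
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);

    // Clear, then redraw every accumulated segment, so whichever buffer is
    // current ends up with the complete picture.
    glClear(GL_COLOR_BUFFER_BIT);
    glDisable(GL_TEXTURE_2D);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, [lineVertices bytes]);
    glDrawArrays(GL_LINES, 0, (GLsizei)([lineVertices length] / (2 * sizeof(GLfloat))));

    glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
    [context presentRenderbuffer:GL_RENDERBUFFER_OES];
}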
Depending on the situation, you can also retain the backing. This will solve your issue here, but will cost you performance.
In your OpenGLES2DView.m change kEAGLDrawablePropertyRetainedBacking to YES. It will look like this:
eaglLayer.drawableProperties = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithBool:YES], kEAGLDrawablePropertyRetainedBacking,
    kEAGLColorFormatRGBA8, kEAGLDrawablePropertyColorFormat,
    nil];
But to reiterate, it will cost performance. That said, if you are trying to write a drawing application (as it appears), it might be what you are looking for: in drawing apps it is not a good idea to rerun the drawing logic every frame, so this is a good solution.