GL_LINES drawing erratically (iPad + OpenGL ES1) - objective-c

I'm experiencing a really strange glitch in my iPad app. It's super simple: I just use the "touchesMoved" handler to draw a line between two points. Since I would like the lines to stay on screen, I'm not calling "glClear" in my draw function, but for some reason some of the lines just drop out, seemingly at random. Stranger yet, it works perfectly in the simulator. Does anybody have any insight into why this might be? I've included my touch and draw routines.
Many thanks!
Pete
-(void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
NSSet *allTouches = [event allTouches];
switch ([allTouches count])
{
case 1:
{ //Single touch
} break;
case 2:
{ //Double Touch
UITouch *touch1 = [[allTouches allObjects] objectAtIndex:0];
CGPoint location1 = [touch1 locationInView: [touch1 view]];
UITouch *touch2 = [[allTouches allObjects] objectAtIndex:1];
CGPoint location2 = [touch2 locationInView: [touch2 view]];
[self drawLineWithStart:location1 end:location2];
} break;
default:
break;
}
}
- (void)drawLineWithStart:(CGPoint)start end:(CGPoint) end
{
[EAGLContext setCurrentContext:context];
glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);
GLfloat lineVertices[] =
{
(start.x/(768.0/2.0)) - 1.0, -1.5 * ((start.y/(1024.0/2.0)) - 1.0),
(end.x/(768.0/2.0)) - 1.0, -1.5 * ((end.y/(1024.0/2.0)) - 1.0)
};
glDisable(GL_TEXTURE_2D);
// Render the vertex array
glVertexPointer(2, GL_FLOAT, 0, lineVertices);
glDrawArrays(GL_LINE_STRIP, 0, 2);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
[context presentRenderbuffer:GL_RENDERBUFFER_OES];
}

This code calls presentRenderbuffer in a double-buffered context without a clear. That means it draws to one buffer, presents it, draws to the other buffer, presents it, and so on. As a result, neither buffer contains all of the line drawing, and each swap (present) shows only the lines that happened to land in that buffer. The simulator's double-buffering scheme is different from the physical device's, which explains the difference you're seeing.
Accumulate the lines in a data structure. Each frame, do the clear and then draw all of the lines.
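For illustration, here is a minimal sketch of that approach. It assumes an NSMutableArray ivar named lines (a name invented here) alongside the context/framebuffer ivars from the question, and packs each segment into an NSValue-wrapped CGRect (start point in origin, end point in size):
// Hypothetical storage: each segment packed into a CGRect inside an NSValue.
- (void)addLineWithStart:(CGPoint)start end:(CGPoint)end
{
    [lines addObject:[NSValue valueWithCGRect:CGRectMake(start.x, start.y, end.x, end.y)]];
}

// Called once per frame: clear, then replay every stored segment.
- (void)drawFrame
{
    [EAGLContext setCurrentContext:context];
    glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);

    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);

    glDisable(GL_TEXTURE_2D);
    glEnableClientState(GL_VERTEX_ARRAY);

    for (NSValue *value in lines) {
        CGRect seg = [value CGRectValue];
        GLfloat lineVertices[] = {
            (seg.origin.x / (768.0f / 2.0f)) - 1.0f, -1.5f * ((seg.origin.y / (1024.0f / 2.0f)) - 1.0f),
            (seg.size.width / (768.0f / 2.0f)) - 1.0f, -1.5f * ((seg.size.height / (1024.0f / 2.0f)) - 1.0f)
        };
        glVertexPointer(2, GL_FLOAT, 0, lineVertices);
        glDrawArrays(GL_LINES, 0, 2);
    }

    glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
    [context presentRenderbuffer:GL_RENDERBUFFER_OES];
}
In touchesMoved you would then call addLineWithStart:end: instead of drawing immediately, and drive drawFrame from a CADisplayLink or your existing render loop.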

Depending on the situation, you can also retain the backing. This will solve your issue here, but will cost you performance.
In your OpenGLES2DView.m change kEAGLDrawablePropertyRetainedBacking to YES. It will look like this:
eaglLayer.drawableProperties = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithBool:YES], kEAGLDrawablePropertyRetainedBacking,
    kEAGLColorFormatRGBA8, kEAGLDrawablePropertyColorFormat,
    nil];
To reiterate, it will cost you performance; however, if you are writing a drawing application (as it looks like you are), it may be exactly what you're looking for. In a drawing app it is not a good idea to rerun the drawing logic every frame, so this is a good solution.

Related

How can I detect which subview is touched when dealing with various moving subviews?

The following code animates an image of a shape from the top of the screen, drifting downward using Core Animation. When the user taps, it logs whether the user tapped the image (the shape) or missed the shape and therefore touched the background. This seems to work fine. However, what about when I add other shape images? I'm looking for suggestions on how to build onto this code so that more detailed information can be logged.
Let's say I want to programmatically add a UIImage of a triangle, a UIImage of a square, and a UIImage of a circle. I want all three images to start drifting from top to bottom. They may even overlap each other as they move. I want to be able to log "You touched the square!" or whichever shape I actually touched, even if the square is positioned between the triangle and the circle but part of it is showing so I can tap it. (This example shows I don't just want to interact with the top-most layer.)
How do I tweak this code to programmatically add in different UIImages (various shape images perhaps) and be able to log which shape I'm touching?
- (void)viewDidLoad
{
[super viewDidLoad];
CGPoint endPoint = CGPointMake([[self view] bounds].size.width,
[[self view] bounds].size.height);
CABasicAnimation *animation = [CABasicAnimation animationWithKeyPath:@"position"];
animation.fromValue = [NSValue valueWithCGPoint:[[_imageView layer] position]];
animation.toValue = [NSValue valueWithCGPoint:endPoint];
animation.duration = 30.0f;
[[_imageView layer] addAnimation:animation forKey:@"position"];
}
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
UITouch *t = [touches anyObject];
CGPoint thePoint = [t locationInView:self.view];
thePoint = [[_imageView layer] convertPoint:thePoint toLayer:[[self view] layer]];
if([[_imageView layer].presentationLayer hitTest:thePoint])
{
NSLog(#"You touched a Shape!");
// for now I'm just logging this information. Eventually I want to have the shape follow my finger as I move it to a new location. I want everything else to continue animating, but when I touch a particular shape I want complete control over repositioning that specific shape. That's just some insight beyond the scope of this question. However, feel free to comment about this if you have suggestions.
}
else{
NSLog(#"backgound touched");
}
}
I'm thinking the answer to this may have something to do with looping through the various subviews. Look at how I'm thinking I might change the -touchesBegan method:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
UITouch *t = [touches anyObject];
CGPoint thePoint = [t locationInView:self.view];
for (UIView *myView in viewArray) {
if (CGRectContainsPoint(myView.frame, thePoint)) {....
Notice here I set up a viewArray and have put all my subviews in the viewArray. Is this something I should be using? Or perhaps something like the following if I was going to loop through my layers:
for(CALayer *mylayer in self.view.layer.sublayers)
No matter how much I try looping through my views and or layers I can't seem to get this to work. I feel like I may just be missing something obvious...
I think the culprit is the line where you change the coordinate system of thePoint. It should probably read convertPoint:fromLayer:, because prior to the execution of that line your point is in the coordinate system of self.view, and I'm assuming you would like it to be in that of the imageView. Alternatively, you could skip that line altogether and call [t locationInView:_imageView] instead.
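For the multi-shape case, here is a rough sketch of how touchesBegan: might look, assuming the animated image views are kept in an NSArray ivar named shapeViews (a name invented for this example):
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *t = [touches anyObject];
    CGPoint pointInView = [t locationInView:self.view];

    for (UIImageView *shapeView in shapeViews) {
        // hitTest: expects the point in the coordinate space of the layer's
        // superlayer; since self.view itself is not animating, the touch
        // point in self.view's coordinates can be used directly against the
        // presentation layer, which reflects the in-flight animation.
        if ([[shapeView.layer presentationLayer] hitTest:pointInView]) {
            NSLog(@"You touched %@!", shapeView);
            return;
        }
    }
    NSLog(@"background touched");
}
Iterate the array from topmost to bottommost if you want overlapping shapes resolved by z-order rather than by array order.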

Need CCRenderTexture to render faster ios

I'm making a drawing app, and I'm having the users draw with CCRenderTexture. It basically keeps rendering a picture of a black circle to simulate drawing. When I move my finger slowly, it works really well since the circles come together to form a line. However, when I move my finger quickly, it ends up just being a bunch of circles that aren't connected (http://postimage.org/image/wvj3w632n/). My question is how to get the render texture to render the image faster, or to have it fill in the blanks for me.
Also, I'm not completely sold on this method, but it's what I've found while looking around. Feel free to suggest whatever you think would be better. I was originally using ccdrawline but it really killed my performance. Thanks!
The gaps between the start and end points need to be filled in. I am pasting code that might help you resolve the situation shown in your link.
In the .h file:
CCRenderTexture *target;
CCSprite* brush;
In the init method of the .m file:
target = [[CCRenderTexture renderTextureWithWidth:size.width height:size.height] retain];
[target setPosition:ccp(size.width/2, size.height/2)];
[self addChild:target z:1];
brush = [[CCSprite spriteWithFile:@"brush_i3.png"] retain];
Add the touch methods; I am showing the touchesMoved code:
-(void)ccTouchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
UITouch *touch = [touches anyObject];
CGPoint start = [touch locationInView: [touch view]];
start = [[CCDirector sharedDirector] convertToGL: start];
CGPoint end = [touch previousLocationInView:[touch view]];
end = [[CCDirector sharedDirector] convertToGL:end];
printf("\n x= %f \t y= %f",start.x,start.y);
float distance = ccpDistance(start, end);
if (distance > 1)
{
int d = (int)distance;
for (int i = 0; i < d; i++)
{
float difx = end.x - start.x;
float dify = end.y - start.y;
float delta = (float)i / distance;
[brush setPosition:ccp(start.x + (difx * delta), start.y + (dify * delta))];
[target begin];
[brush setColor:ccc3(0, 255, 0)];
brush.opacity = 5;
[brush visit];
[target end];
}
}
}
Hopefully this works for you.
It's not that CCRenderTexture draws too slowly; it's that the touch event only fires so often. You do need to fill in the gaps between the touch points you receive.
There is a great tutorial here about it which you may have already seen, http://www.learn-cocos2d.com/2011/12/how-to-use-ccrendertexture-motion-blur-screenshots-drawing-sketches/#sketching

glDrawArrays not drawing correctly

I made a painting program. Everything works as I expected, but while drawing, some strange things sometimes happen.
I run the app and press the left mouse button on the image. It should draw a point using this code:
glEnableClientState(GL_VERTEX_ARRAY);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, brushTextura);
glPointSize(100);
glVertexPointer(2, GL_FLOAT, 0,GLVertices);
glDrawArrays(GL_POINTS, 0, count);
glDisableClientState(GL_VERTEX_ARRAY);
at the point where I press. mouseDown registers the mouse-down location, converts it to an NSValue, and adds it to an array; before drawing I extract the NSValue back into a CGPoint and copy it into a GLfloat buffer so it can be drawn by glDrawArrays. But no matter where I click on the image, the first point is drawn at coordinates (0,0). After that everything works OK. See image:
That was the first problem. The second problem is that when I paint (dragging with the mouse pressed), points sometimes appear where I haven't drawn. Image:
When I continue dragging, it disappears. After more dragging it appears and disappears again, and so on. Image:
Any ideas why this could happen? I'll post the code below:
Mouse down:
- (void) mouseDown:(NSEvent *)event
{
location = [self convertPoint: [event locationInWindow] fromView:self];
NSValue *locationValue = [NSValue valueWithPoint:location];
[vertices addObject:locationValue];
[self drawing];
}
Mouse dragged:
- (void) mouseDragged:(NSEvent *)event
{
location = [self convertPoint: [event locationInWindow] fromView:self];
NSValue *locationValue = [NSValue valueWithPoint:location];
[vertices addObject:locationValue];
[self drawing];
}
Drawing:
- (void) drawing {
int count = [vertices count] * 2;
NSLog(#"count: %d", count);
int currIndex = 0;
GLfloat *GLVertices = (GLfloat *)malloc(count * sizeof(GLfloat));
for (NSValue *locationValue in vertices) {
CGPoint loc = locationValue.pointValue;
GLVertices[currIndex++] = loc.x;
GLVertices[currIndex++] = loc.y;
}
glEnableClientState(GL_VERTEX_ARRAY);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, brushTextura);
glPointSize(100);
glVertexPointer(2, GL_FLOAT, 0, GLVertices);
glDrawArrays(GL_POINTS, 0, count);
glDisableClientState(GL_VERTEX_ARRAY);
}
You are setting your count variable (the one used in glDrawArrays) to [vertices count] * 2, which seems strange.
The last argument to glDrawArrays is the number of vertices to draw, whereas your code sets it to double that number (maybe you thought it was the number of floats?), which means everything after the first [vertices count] vertices is just rubbish.
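A minimal sketch of the corrected call, reusing the vertices and GLVertices names from the question (the buffer still holds two floats per vertex, but the draw count is the number of vertices):
int vertexCount = (int)[vertices count];      // one vertex per stored point
glVertexPointer(2, GL_FLOAT, 0, GLVertices);  // 2 floats per vertex
glDrawArrays(GL_POINTS, 0, vertexCount);      // count is in vertices, not floats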
The fact that the vertices aren't rendered at the exact location you clicked should be a hint that the problem is you haven't properly determined the hit point within the view.
Your code has:
location = [self convertPoint: [event locationInWindow] fromView: self];
which tells the view to convert the point from its coordinates (self) to the same view's coordinates (self), even though the point is actually relative to the window.
To convert the point from the window's coordinates to the view, change that line to the following:
location = [self convertPoint: [event locationInWindow] fromView: nil];
The arguments to glDrawArrays are defined as (GLenum mode, GLint first, GLsizei count).
The second argument defines the first index of the vertex attributes used when drawing. You're passing 1 as the first index, which makes your vertex coordinates mismatch. I assume that you want 0 there.
http://www.opengl.org/sdk/docs/man/xhtml/glDrawArrays.xml

How to detect which word touched with CoreText

I am using CoreText to lay out a custom view. The next step for me is to detect which word is tapped in a touch/gesture event. I have done research on this and found advice on how to make a custom label respond to touches on a URL, but nothing generic. Does anyone have any idea how to do this?
UPDATE:
Here is the code within my drawRect: method
self.attribString = [aString copy];
CGContextRef context = UIGraphicsGetCurrentContext();
// Flip the coordinate system
CGContextSetTextMatrix(context, CGAffineTransformIdentity);
CGContextTranslateCTM(context, 0, self.bounds.size.height);
CGContextScaleCTM(context, 1.0, -1.0);
CGMutablePathRef path = CGPathCreateMutable(); //1
CGPathAddRect(path, NULL, self.bounds );
CTFramesetterRef framesetter = CTFramesetterCreateWithAttributedString((__bridge CFAttributedStringRef)aString); //3
CTFrameRef frame = CTFramesetterCreateFrame(framesetter, CFRangeMake(0, [aString length]), path, NULL);
CTFrameDraw(frame, context); //4
UIGraphicsPushContext(context);
frameRef = frame;
CFRelease(path);
CFRelease(framesetter);
Here is where I am trying to handle the touch event
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
UITouch *touch = [touches anyObject];
CGPoint point = [touch locationInView:self];
CGContextRef context = UIGraphicsGetCurrentContext();
CFArrayRef lines = CTFrameGetLines(frameRef);
for(CFIndex i = 0; i < CFArrayGetCount(lines); i++)
{
CTLineRef line = CFArrayGetValueAtIndex(lines, i);
CGRect lineBounds = CTLineGetImageBounds(line, context);
NSLog(#"Line %ld (%f, %f, %f, %f)", i, lineBounds.origin.x, lineBounds.origin.y, lineBounds.size.width, lineBounds.size.height);
NSLog(#"Point (%f, %f)", point.x, point.y);
if(CGRectContainsPoint(lineBounds, point))
{
It seems that CTLineGetImageBounds is returning a wrong origin (the size seems correct); here is one example of the NSLog output: "Line 0 (0.562500, -0.281250, 279.837891, 17.753906)".
There is no "current context" in touchesEnded:withEvent:. You are not drawing at this point. So you can't call CTLineGetImageBounds() meaningfully.
I believe the best solution here is to use CTFrameGetLineOrigins() to find the correct line (by checking the Y origins), and then using CTLineGetStringIndexForPosition() to find the correct character within the line (after subtracting the line origin from point). This works best for simple stacked lines that run the entire view (such as yours).
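As a rough sketch of that approach (assuming the frameRef ivar from the question and the flipped, bottom-left-origin coordinate system set up in drawRect:), something like this hypothetical helper could map a touch to a character index, from which the containing word can then be derived:
// Hypothetical helper: returns the character index under a touch point,
// or kCFNotFound if no line was hit.
- (CFIndex)characterIndexAtPoint:(CGPoint)point
{
    // Convert from UIKit's top-left origin to Core Text's bottom-left origin.
    CGPoint flipped = CGPointMake(point.x, self.bounds.size.height - point.y);

    CFArrayRef lines = CTFrameGetLines(frameRef);
    CFIndex lineCount = CFArrayGetCount(lines);
    CGPoint origins[lineCount];
    CTFrameGetLineOrigins(frameRef, CFRangeMake(0, lineCount), origins);

    for (CFIndex i = 0; i < lineCount; i++) {
        // Line origins are baselines ordered top to bottom, so the first
        // baseline that lies below the touch belongs to the tapped line.
        if (flipped.y > origins[i].y) {
            CTLineRef line = (CTLineRef)CFArrayGetValueAtIndex(lines, i);
            CGPoint pointInLine = CGPointMake(flipped.x - origins[i].x,
                                              flipped.y - origins[i].y);
            return CTLineGetStringIndexForPosition(line, pointInLine);
        }
    }
    return kCFNotFound;
}
From the returned character index you can expand outwards in the attributed string's backing string (for example with enumerateSubstringsInRange:options:usingBlock: and NSStringEnumerationByWords) to get the tapped word.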
Other solutions:
Calculate all the line rectangles during drawRect: and cache them. Then you can just do rectangle checks in touchesEnded:.... That's a very good solution if drawing is less common than tapping. If drawing is significantly more common than tapping, then that's a bad approach.
Do all your calculations using CTLineGetTypographicBounds(). That doesn't require a graphics context. You can use this to work out the rectangles.
In drawRect: generate a CGLayer with the current context and store it in an ivar. Then use the context from the CGLayer to calculate CTLineGetImageBounds(). The context from the CGLayer will be "compatible" with the graphics context you're using to draw.
Side note: Why are you calling UIGraphicsPushContext(context); in drawRect:? You're setting the current context to the current context. That doesn't make sense. And I don't see a corresponding UIGraphicsPopContext().

Control the animation by touch

I have this type of code:
// create a UIImageView
UIImageView *rollDiceImageMainTemp = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"rollDiceAnimationImage1.png"]];
// position and size the UIImageView
rollDiceImageMainTemp.frame = CGRectMake(0, 0, 100, 100);
// create an array of images that will represent your animation (in this case the array contains 2 images but you will want more)
NSArray *savingHighScoreAnimationImages = [NSArray arrayWithObjects:
[UIImage imageNamed:#"rollDiceAnimationImage1.png"],
[UIImage imageNamed:#"rollDiceAnimationImage2.png"],
nil];
// set the new UIImageView to a property in your view controller
self.viewController.rollDiceImageMain = rollDiceImageMainTemp;
// release the UIImageView that you created with alloc and init to avoid memory leak
[rollDiceImageMainTemp release];
// set the animation images and duration, and repeat count on your UIImageView
[self.viewController.rollDiceImageMain setAnimationImages:savingHighScoreAnimationImages];
[self.viewController.rollDiceImageMain setAnimationDuration:2.0];
[self.viewController.rollDiceImageMain setAnimationRepeatCount:3];
// start the animation
[self.viewController.rollDiceImageMain startAnimating];
// show the new UIImageView
[self.viewController.view addSubview:self.viewController.rollDiceImageMain];
Instead of calling startAnimating directly, is there any way to control this code using touchesMoved?
Actually, what you want is - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event. Below is a code sample that allows a UIView to move along the x-axis only. Adapt it to whatever you need your code to do.
- (void)touchesMoved:(NSSet*)touches withEvent:(UIEvent*)event
{
CGPoint original = self.center;
UITouch* touch = [touches anyObject];
CGPoint location = [touch locationInView:self.superview];
// Taking the delta will give us a nice, smooth movement that will
// track with the touch.
float delta = location.x - original.x;
// We need to substract half of the width of the view
// because we are using the view's center to reposition it.
float maxPos = self.superview.bounds.size.width
- (self.frame.size.width * 0.5f);
float minPos = self.frame.size.width * 0.5f;
float intendedPos = delta + original.x;
// Make sure they can't move the view off-screen
if (intendedPos > maxPos)
{
intendedPos = maxPos;
}
// Make sure they can't move the view off-screen
if (intendedPos < minPos)
{
intendedPos = minPos;
}
self.center = CGPointMake(intendedPos, original.y);
// We want to cancel all other touches for the view
// because we don't want the touchInside event firing.
[self touchesCancelled:touches withEvent:event];
// Pass on the touches to the super
[super touchesMoved:touches withEvent:event];
}
Notice that I am taking the delta of the movement rather than snapping the view to the finger's position. If you snap to the finger you will get very erratic, undesirable behavior. Applying the delta gives nice, fluid movement of the view that tracks perfectly with the touch input.
Update: For those wondering why I chose to multiply by 0.5f rather than divide by 2: the ARM processor doesn't support division in hardware, so there is a minuscule performance bump from using multiplication. An optimization like this might not matter in code called only a few times over the life of a program, but this particular method is called many, many times while dragging, so the multiplication may be worth it here.
You can detect touches by using UIResponder methods like - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event; in that method you can call [self.viewController.rollDiceImageMain startAnimating];.
Once the animation starts, you can stop it after some time and restrict further touches.
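A minimal sketch of that idea, using the rollDiceImageMain property from the question (the isAnimating check is one way to restrict touches while a roll is running, and the delay simply mirrors the duration times the repeat count):
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    UIImageView *dice = self.viewController.rollDiceImageMain;
    if (dice.isAnimating) {
        return; // ignore touches while a roll is already in progress
    }
    [dice startAnimating];

    // Stop (or run any follow-up logic) once the repeats have finished.
    double delay = dice.animationDuration * dice.animationRepeatCount; // 2.0 * 3
    dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(delay * NSEC_PER_SEC)),
                   dispatch_get_main_queue(), ^{
        [dice stopAnimating];
    });
}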