Touch node selection doesn't work cocos2d objective c - objective-c

I'm working on a node selection procedure.
I have an array of CCDrawNode objects and I need to do something with the selected node.
I do something like this:
CGPoint TOUCHLOC = [touch locationInView:[touch view]];
for (int j = 0; j < [theArray count]; j++)
{
    node = [theArray objectAtIndex:j];
    if (CGRectContainsPoint(node.boundingBox, TOUCHLOC))
    {
        return node;
    }
}
Also tried something like this:
float left   = node.boundingBox.origin.x - node.boundingBox.size.width;
float right  = node.boundingBox.origin.x + node.boundingBox.size.width;
float top    = node.boundingBox.origin.y + node.boundingBox.size.height;
float bottom = node.boundingBox.origin.y - node.boundingBox.size.height;
CGRect r = CGRectMake(left, bottom, right - left, top - bottom);
// or:
CGRect r = CGRectMake(left, bottom, top - bottom, right - left);
if (CGRectContainsPoint(r, TOUCHLOC))
{
    return node;
}
This procedure was working perfectly, but I updated the framework and the method
[touch locationInWorld];
is deprecated. Now it is
[touch locationInView:[touch view]];
This doesn't work correctly: sometimes it selects an incorrect node, or none at all.
Does anyone know the correct way to select the node?
Thanks in advance!
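A likely culprit is the coordinate system: a raw locationInView: point uses UIKit's top-left origin, while cocos2d node bounding boxes live in OpenGL's bottom-left-origin space, so the point usually needs to go through [[CCDirector sharedDirector] convertToGL:] before hit-testing (the tiled-map question further down uses exactly that chain). The flip and the containment test can be sketched in plain C (the structs here are stand-ins for CGPoint/CGRect, not the real CGGeometry types):

```c
#include <assert.h>
#include <stdbool.h>

/* Minimal stand-ins for CGPoint/CGRect, for illustration only. */
typedef struct { float x, y; } Point;
typedef struct { float x, y, w, h; } Rect;

/* convertToGL: essentially flips the Y axis: UIKit puts the origin
 * top-left, OpenGL bottom-left. A raw locationInView: point must be
 * flipped before comparing it against node bounding boxes. */
static Point flip_y(Point p, float screen_height) {
    Point r = { p.x, screen_height - p.y };
    return r;
}

/* Same containment test CGRectContainsPoint performs. */
static bool rect_contains(Rect r, Point p) {
    return p.x >= r.x && p.x <= r.x + r.w &&
           p.y >= r.y && p.y <= r.y + r.h;
}
```

Without the flip, a touch near the top of the screen tests against the bottom of the world, which matches the "sometimes selects an incorrect node or none" symptom.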

Related

How can I have the same sprite in multiple locations dynamically?

How can I have the same sprite in multiple locations dynamically? I have already seen the other question, but you can only do that with three sprites; I want a dynamic number of sprites. My objective is that, instead of shooting only one bullet, I want to shoot three or more. I have all of the math done, but I need to draw the three sprites in a for-loop. Here is what I have so far.
- (void)ccTouchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    CGPoint pointOne = [touch locationInView:[touch view]];
    CGSize size = [[CCDirector sharedDirector] winSize];
    CGPoint position = turret.position;
    CGFloat degrees = angleBetweenLinesInDegrees(position, pointOne);
    turret.rotation = degrees;
    pointOne.y = size.height - pointOne.y;
    CCSprite *projectile = [CCSprite spriteWithFile:@"projectile.png"];
    projectile.position = turret.position;
    // Determine offset of location to projectile
    int angle = angleBetweenLinesInDegrees(position, pointOne);
    int startAngle = angle - 15;
    int shots = 3;
    NSMutableArray *projectiles = [[NSMutableArray alloc] initWithCapacity:shots];
    // Ok to add now - we've double checked position
    for (int i = 0; i < shots; i++) {
        [self addChild:projectile z:1];
        int angleToShoot = angle;
        int x = size.width;
        int y = x * tan(angle);
        CGPoint realDest = ccp(x, y);
        projectile.tag = 2;
        if (paused == 0) {
            [_projectiles addObject:projectile];
            // Move projectile to actual endpoint
            [projectile runAction:
                [CCSequence actions:
                    [CCMoveTo actionWithDuration:1 position:realDest],
                    [CCCallBlockN actionWithBlock:^(CCNode *node) {
                        [_projectiles removeObject:node];
                        [node removeFromParentAndCleanup:YES];
                    }],
                    nil]];
        }
    }
}
This gives me the error: 'NSInternalInconsistencyException', reason: 'child already added. It can't be added again'
You need to create 3 different sprites and add all 3 of them as children.
Usually for stuff like this it is better to use a CCSpriteBatchNode (take a look at the cocos2d docs).
With a batch node all the children are drawn in 1 draw call, with the only constraint that all the children of the batch node need to have their texture on the same sprite sheet (or, in your case, the same "filename").
For just 3 projectiles you won't have performance problems, but it's the correct way to design it; if you ever need dozens of projectiles on screen, without a batch node the game won't run smoothly.
To recap:
create a CCSpriteBatchNode,
add the batch node as a child of self (I suppose it's your layer or main node),
create 3 sprites and add them as children of the batch node.

Detect closer point inside a view from touch

Is there a way, when the user touches outside a view, for the app to detect the closest point inside that view? I want to detect it just like the image below.
EDIT:
CGPoint touchPoint = [[touches anyObject] locationInView:self.view];
if (CGRectContainsPoint([_grayView frame], touchPoint)) {
    // The touch was inside the gray view
} else {
    // The touch was outside the view. Detect the closest CGPoint inside the gray view.
    // The detection must be relative to the whole view (in the image example, the
    // CGPoint returned would be relative to the whole screen)
}
static float bound(float pt, float min, float max)
{
    if (pt < min) return min;
    if (pt > max) return max;
    return pt;
}

static CGPoint boundPoint(CGPoint touch, CGRect bounds)
{
    touch.x = bound(touch.x, bounds.origin.x, bounds.origin.x + bounds.size.width);
    touch.y = bound(touch.y, bounds.origin.y, bounds.origin.y + bounds.size.height);
    return touch;
}
All you need is a little math:
Ask the touch for its locationInView: with the view in question as the argument.
Compare the point's x and y with the view's bounds, clamping each to the extrema of that CGRect.
There is no step three; the result of the above is the point you are looking for.
Try this code!
CGPoint pt = [[touches anyObject] locationInView:childView];
if (pt.x >= 0 && pt.x <= childView.frame.size.width
    && pt.y >= 0 && pt.y <= childView.frame.size.height) {
    NSLog(@"Touch inside rect");
    return;
}
pt.x = MIN(childView.frame.size.width, MAX(0, pt.x));
pt.y = MIN(childView.frame.size.height, MAX(0, pt.y));
// and here is the point
NSLog(@"The closest point is %f, %f", pt.x, pt.y);

CCTMXTiledMap touch detection is a bit off

I used the tile tutorial from www.raywenderlich.com. I set up a tile map in a landscape view and used the code from the tutorial to detect touch, like this:
CGPoint touchLocation = [touch locationInView: [touch view]];
touchLocation = [[CCDirector sharedDirector] convertToGL: touchLocation];
touchLocation = [self convertToNodeSpace:touchLocation];
And use this bit to figure which tile it is on:
CGPoint touchTilePos = [self tileCoordForPosition:touchLocation];
int tileIndex = [self tileIndexForTileCoord:(touchTilePos)];
CCSprite *touchTile = [self.background tileAt:touchTilePos];
self.player.position = [touchTile.parent convertToWorldSpace:touchTile.position];
Problem is that it's a bit off. A touch close to the left side is fairly accurate, but a touch close to the right side is detected way off to the left. Seems like some sort of scaling issue, but the only scaling in my code is the player sprite.
Any ideas? Suggestions?
Any help is greatly appreciated!
EDIT: here is the two methods referenced in the code:
- (CGPoint) tileCoordForPosition:(CGPoint)position
{
    int x = position.x / _tileMap.tileSize.width;
    int y = ((_tileMap.mapSize.height * _tileMap.tileSize.height) - position.y) / _tileMap.tileSize.height;
    return ccp(x, y);
}

- (int) tileIndexForTileCoord:(CGPoint)aTileCoord
{
    return aTileCoord.y * (self.tileMap.mapSize.width) + aTileCoord.x;
}
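The coordinate math in those two methods can be checked in isolation. Here it is as a plain-C sketch (the tile and map sizes in the test are made-up values, and the structs stand in for CGPoint):

```c
#include <assert.h>

typedef struct { int x, y; } TileCoord;

/* Mirrors tileCoordForPosition: tile maps count rows from the top,
 * while GL positions count from the bottom, hence the height flip. */
static TileCoord tile_for_position(float px, float py,
                                   float tile_w, float tile_h,
                                   float map_h) {
    TileCoord t;
    t.x = (int)(px / tile_w);
    t.y = (int)((map_h * tile_h - py) / tile_h);
    return t;
}

/* Mirrors tileIndexForTileCoord: row-major index into the tile array. */
static int tile_index(TileCoord t, int map_w) {
    return t.y * map_w + t.x;
}
```

For square (non-hex) tiles this math is sound, which supports the answer below: when it drifts, the input point was converted against the wrong node, not miscomputed here.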
UPDATE:
I simplified the code:
CGPoint touchLocation = [touch locationInView: [touch view]];
touchLocation = [[CCDirector sharedDirector] convertToGL: touchLocation];
touchLocation = [self.background convertToNodeSpace:touchLocation];
int x = touchLocation.x / _tileMap.tileSize.width;
int y = ((_tileMap.mapSize.height * _tileMap.tileSize.height) - touchLocation.y) / _tileMap.tileSize.height;
CGPoint touchTilePos = ccp(x,y);
Also, I noticed that right off the start touchLocation is off to the left; I don't think this code works correctly for my case (iPad/landscape):
CGPoint touchLocation = [touch locationInView: [touch view]];
I made a fatal mistake! I am using HEX tiles, and they overlap a bit (by about 1/4 of their width). I have to account for that, and that's why the effect gets worse as I tap further to the right.
Here is the code to fix it:
int x = (touchLocation.x - _tileMap.tileSize.width / 8) / (_tileMap.tileSize.width * (3.0 / 4.0));
int y = 0;
if (x % 2 == 0)
{
    // even column
    y = ((_tileMap.mapSize.height * _tileMap.tileSize.height) - touchLocation.y) / _tileMap.tileSize.height;
}
else
{
    // odd column, shifted by half a tile
    y = ((_tileMap.mapSize.height * _tileMap.tileSize.height) - (touchLocation.y + _tileMap.tileSize.height / 2)) / _tileMap.tileSize.height;
}
CGPoint touchTilePos = ccp(x, y);
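The hex fix above is worth sanity-checking numerically, since the column narrowing and the odd-column half-tile shift are easy to get backwards. A plain-C mirror of it (sizes in the test are made-up, not from the post):

```c
#include <assert.h>

typedef struct { int x, y; } TileCoord;

/* Mirrors the poster's hex-tile fix: columns are only 3/4 of a tile wide
 * because adjacent hex columns overlap by a quarter of their width, and
 * odd columns sit half a tile lower than even ones. */
static TileCoord hex_tile_for_position(float px, float py,
                                       float tile_w, float tile_h,
                                       float map_h) {
    TileCoord t;
    /* tile_w/8 shifts the pick onto the hex body before dividing by
     * the effective (overlapped) column width. */
    t.x = (int)((px - tile_w / 8.0f) / (tile_w * 0.75f));
    if (t.x % 2 == 0) {
        /* even column */
        t.y = (int)((map_h * tile_h - py) / tile_h);
    } else {
        /* odd column: compensate for the half-tile vertical offset */
        t.y = (int)((map_h * tile_h - (py + tile_h / 2.0f)) / tile_h);
    }
    return t;
}
```

Note how the error compounds with x when the 3/4 factor is omitted, which matches the "gets worse as I tap further to the right" symptom.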
Cheers!
The tutorial code looks like it should work fine. I use code that's mathematically the same with consistent/correct results.
My only suggestion is to make doubly sure that the self object in this line:
touchLocation = [self convertToNodeSpace:touchLocation];
is the actual, immediate parent node of your TMXMap.
In my code, the CCLayer is not the immediate parent, so I end up explicitly tagging my map's parent node so I can grab it like this:
CCNode *parentNode = [self getChildByTag:kTagParentNode];
CGPoint worldPt = [parentNode convertToNodeSpace:touchLocation];

Drag & Resize UIView using Touches, a working solution asking for feedback

In my app I have a view I want to resize using a two-finger touch, similar to, but not quite, what the pinch gesture recognizer detects.
The idea is similar to what you would do on the desktop by grabbing one of the four corners with the mouse, except that I want a more "touch friendly" interface, where the amount by which each such corner shrinks or grows is independent in the horizontal and vertical directions: that's where I depart from pinch, as the pinch's scale factor is the same for both X and Y, which is not what I want.
What I want is to detect two such fingers and resize/move the view as appropriate.
And I have succeeded.
The idea I used (in addition to dealing with the half persistence of UITouch objects ...) was to deem the last moving finger as the "target", with the previous moving one as the "anchor".
I then compute a vector from anchor to target that points to one of the four corners, (it always does even when on an axis) allowing me to expand/shrink the width/height of the view at the same time as moving or not its origin, giving the effect that you can resize the top and left (origin change required) as well as the width/height only (origin left alone) or a combination of both.
To determine how much I need to shrink/grow/offset I use the difference between the current target point and the previous target point. In other words, the vector is used to determine which corner I am pointing to, and thus which "quadrant" the touch operates in, thus allowing me to chose which of x, y, width or height to alter, and the target current/previous position tell me by how much.
There are two problems, both of which I can live with, but I am wondering if anyone has gone the extra mile.
The user experience is great except for the slightly unnatural feeling which results from resizing the top right corner using a gesture where both fingers reside in the bottom left corner. This does exactly what the finger motion dictates, but feels a bit like the "spooky action at a distance". Maybe I just need to get used to it? I am failing to think of how to amend the gesture to achieve something more natural.
The math. Kind of ugly. I wanted to use an affine transform but failed to see how I could apply it to my problem, so I resorted to the old trick of arcsine/arccosine, and then "switched" on the vector direction to determine which "quadrant" (of some hypothetical unit circle, related only to the relative position of anchor and target, irrespective of where they are in the view; hence problem #1).
so the questions summary:
Is there a better, user friendlier approach that would make the
drag/resize effect more consistent with where the fingers are within
the view?
Would an affine transform make the code cleaner? how?
The code.
A: wrapping UITouches
@interface UITouchWrapper : NSObject
@property (assign, nonatomic) CGPoint centerOffset ;
@property (assign, nonatomic) UITouch * touch ;
@end

@implementation UITouchWrapper
@synthesize centerOffset ;
@synthesize touch ;
- (void) dealloc {
    ::NSLog(@"letting go of %@", self.touch) ;
}
@end
B. UITouch handling
@property (strong, nonatomic) NSMutableArray * touchesWrapper ;
@synthesize touchesWrapper ;
- (UITouchWrapper *) wrapperForTouch: (UITouch *) touch {
    for (UITouchWrapper * w in self.touchesWrapper) {
        if (w.touch == touch) {
            return w ;
        }
    }
    UITouchWrapper * w = [[UITouchWrapper alloc] init] ;
    w.touch = touch ;
    [self.touchesWrapper addObject:w] ;
    return w ;
}

- (void) releaseWrapper: (UITouchWrapper *) wrapper {
    [self.touchesWrapper removeObject:wrapper] ;
}

- (NSUInteger) wrapperCount {
    return [self.touchesWrapper count] ;
}
C: touch began
- (void) touchesBegan:(NSSet *) touches withEvent:(UIEvent *)event {
    // prime (possibly) our touch references. Touch events are unrelated ...
    for (UITouch * touch in [touches allObjects]) {
        // created on the fly if required
        UITouchWrapper * w = [self wrapperForTouch:touch] ;
        CGPoint p = [touch locationInView:[self superview]] ;
        p.x -= self.center.x ;
        p.y -= self.center.y ;
        w.centerOffset = p ;
    }
}
}
D: finding 'the other' point (anchor)
- (UITouch *) anchorTouchFor: (UITouch *) touch {
    NSTimeInterval mostRecent = 0.0f ;
    UITouch * anchor = nil ;
    for (UITouchWrapper * w in touchesWrapper) {
        if (w.touch == touch) {
            continue ;
        }
        if (mostRecent < w.touch.timestamp) {
            mostRecent = w.touch.timestamp ;
            anchor = w.touch ;
        }
    }
    return anchor ;
}
E: detecting a drag (= single touch move)
- (void) touchesMoved:(NSSet *)touches withEvent:(UIEvent *) event {
    CGRect frame = self.frame ;
    for (UITouch * touch in [touches allObjects]) {
        UITouchWrapper * w = [self wrapperForTouch:touch] ;
        if ([self wrapperCount] == 1) {
            // that's a drag. w.touch and touch MUST agree
            CGPoint movePoint = [touch locationInView:[self superview]] ;
            CGPoint center = self.center ;
            center.x = movePoint.x - w.centerOffset.x ;
            center.y = movePoint.y - w.centerOffset.y ;
            self.center = center ;
            CGPoint p = movePoint ;
            p.x -= self.center.x ;
            p.y -= self.center.y ;
            w.centerOffset = p ;
            [self setNeedsDisplay] ;
            // ...
        }
    }
}
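The core of that drag logic, stripped of the UIKit plumbing, is two subtractions: store the touch-to-center offset on touch-down, then derive each new center from the move point minus that offset so the view never jumps to put its center under the finger. A plain-C sketch (names are mine, not from the post):

```c
#include <assert.h>

typedef struct { float x, y; } Point;

/* On touchesBegan we record offset = touch - center; on each move the
 * new center is the move point minus that stored offset, preserving the
 * finger's grip point on the view. */
static Point drag_center(Point move_point, Point center_offset) {
    Point c = { move_point.x - center_offset.x,
                move_point.y - center_offset.y };
    return c;
}
```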
F: computing the angle [0 .. 2 pi] of the vector anchor:touch
- (float) angleBetween: (UITouch *) anchor andTouch: (UITouch *) touch {
    // the coordinate system is flipped along the Y-axis...
    CGPoint a = [anchor locationInView:[self superview]] ;
    CGPoint t = [touch locationInView:[self superview]] ;
    // swap a and t to compensate for the flipped coordinate system
    CGPoint d = CGPointMake(t.x - a.x, a.y - t.y) ;
    float distance = sqrtf(d.x * d.x + d.y * d.y) ;
    float cosa = (t.x - a.x) / distance ;
    float sina = (a.y - t.y) / distance ;
    float rc = ::acosf(cosa) ;
    float rs = ::asinf(sina) ;
    return rs >= 0.0f ? rc : (2.0f * M_PI) - rc ;
}
G: handling the resize:
- (void) touchesMoved:(NSSet *)touches withEvent:(UIEvent *) event {
    CGRect frame = self.frame ;
    // ...
    // That's a resize. We need to determine the direction of the
    // move. It is given by the vector made of this touch and the other
    // touch. But if we have more than 2 touches, we use the one whose
    // time stamp is closest to this touch.
    UITouch * anchor = [self anchorTouchFor:touch] ;
    // don't do anything if we cannot find an anchor
    if (anchor == nil) return ;
    CGPoint oldLoc = [touch previousLocationInView:[self superview]] ;
    CGPoint newLoc = [touch locationInView:[self superview]] ;
    CGPoint p = newLoc ;
    p.x -= self.center.x ;
    p.y -= self.center.y ;
    w.centerOffset = p ;
    CGFloat dx = newLoc.x - oldLoc.x ;
    CGFloat dy = newLoc.y - oldLoc.y ;
    float angle = [self angleBetween:anchor andTouch:touch] ;
    if (angle >= M_PI + M_PI_2) { // 270 .. 360 bottom right
        frame.size.width += dx ;
        frame.size.height += dy ;
    } else if (angle >= M_PI) { // 180 .. 270 bottom left
        frame.size.width -= dx ;
        frame.size.height += dy ;
        frame.origin.x += dx ;
    } else if (angle >= M_PI_2) { // 90 .. 180 top left
        frame.size.width -= dx ;
        frame.origin.x += dx ;
        frame.size.height -= dy ;
        frame.origin.y += dy ;
    } else { // 0 .. 90 top right
        frame.size.width += dx ;
        frame.size.height -= dy ;
        frame.origin.y += dy ;
    }
    // ...
    self.frame = frame ;
    [self setNeedsLayout] ;
    [self setNeedsDisplay] ;
}
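The quadrant switch is the part worth testing on its own: the angle of the anchor-to-target vector selects a corner, and dx/dy are applied to the matching edges (with origin adjustments for the left and top edges). Here it is mirrored in plain C (the Frame struct stands in for CGRect):

```c
#include <assert.h>

typedef struct { float x, y, w, h; } Frame;

static const float PI_F = 3.14159265f;
static const float HALF_PI_F = 1.57079633f;

/* Mirrors the quadrant switch above: the angle picks which corner is
 * being dragged; dx/dy are the finger's movement since the last event. */
static Frame resize_frame(Frame f, float angle, float dx, float dy) {
    if (angle >= PI_F + HALF_PI_F) {        /* 270..360: bottom right */
        f.w += dx; f.h += dy;
    } else if (angle >= PI_F) {             /* 180..270: bottom left */
        f.w -= dx; f.h += dy; f.x += dx;
    } else if (angle >= HALF_PI_F) {        /* 90..180: top left */
        f.w -= dx; f.x += dx; f.h -= dy; f.y += dy;
    } else {                                /* 0..90: top right */
        f.w += dx; f.h -= dy; f.y += dy;
    }
    return f;
}
```

Factoring this out also makes question 2 concrete: an affine transform cannot replace it cleanly, because the left/top cases move the origin while the right/bottom cases do not, which is exactly the asymmetry a single scale matrix cannot express.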
H: cleanup on touchesEnded/touchesCancelled
for (UITouch * touch in [touches allObjects]) {
    UITouchWrapper * w = [self wrapperForTouch:touch] ;
    if (w.touch == touch) {
        [self releaseWrapper:w] ;
    }
}

Best way to save/load stroke data of the sample GLPaint app?

I noticed that by default the sample app GLPaint comes with recorded data in binary that it loads on start; it is what is used to display the "Shake me" text.
I'm creating a similar painting app and I was wondering what the best way is to record these strokes and load them up next time.
Currently I tried saving the location of every vertex using Core Data, then upon start reading and rendering all the points again, but I found this too slow.
Is there a better/more efficient method of doing this? Can the entire view buffer be saved as binary and then loaded back up?
If you look at Recording.data you will notice that each line is its own array. To capture the ink and play it back you need an array of arrays. For the purposes of this demo, declare a mutable array, writRay:
@synthesize writRay;
// init in code
writRay = [[NSMutableArray alloc] init];
Capture the ink
// Handles the continuation of a touch.
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    CGRect bounds = [self bounds];
    UITouch *touch = [[event touchesForView:self] anyObject];
    // Convert touch point from UIView referential to OpenGL one (upside-down flip)
    if (firstTouch) {
        firstTouch = NO;
        previousLocation = [touch previousLocationInView:self];
        previousLocation.y = bounds.size.height - previousLocation.y;
        /* create a new array for this stroke's points */
        [writRay addObject:[[NSMutableArray alloc] init]];
        /* add 1st point */
        [[writRay objectAtIndex:[writRay count] - 1] addObject:[NSValue valueWithCGPoint:previousLocation]];
    } else {
        location = [touch locationInView:self];
        location.y = bounds.size.height - location.y;
        previousLocation = [touch previousLocationInView:self];
        previousLocation.y = bounds.size.height - previousLocation.y;
        /* add additional points */
        [[writRay objectAtIndex:[writRay count] - 1] addObject:[NSValue valueWithCGPoint:previousLocation]];
    }
    // Render the stroke
    [self renderLineFromPoint:previousLocation toPoint:location];
}
Playback the ink.
- (void)playRay {
    if (writRay != nil) {
        for (int l = 0; l < [writRay count]; l++) {
            // replays my writRay; -1 because we draw point p to point p+1
            for (int p = 0; p < [[writRay objectAtIndex:l] count] - 1; p++) {
                [self renderLineFromPoint:[[[writRay objectAtIndex:l] objectAtIndex:p] CGPointValue]
                                  toPoint:[[[writRay objectAtIndex:l] objectAtIndex:p + 1] CGPointValue]];
            }
        }
    }
}
For best effect shake the screen to clear and call playRay from changeBrushColor in the AppController.
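On the persistence half of the question: an array-of-arrays of points serialises naturally as flat binary, one stroke at a time, and reading it back is vastly cheaper than one Core Data record per vertex. A plain-C sketch of the layout (a length-prefixed point list per stroke; the function names and format are made up for illustration, not GLPaint's actual Recording.data format):

```c
#include <assert.h>
#include <stdio.h>

typedef struct { float x, y; } Pt;

/* Write one stroke: its point count, then its raw points. */
static void save_stroke(FILE *f, const Pt *pts, int n) {
    fwrite(&n, sizeof n, 1, f);
    fwrite(pts, sizeof *pts, (size_t)n, f);
}

/* Read one stroke back; returns the number of points read, 0 on error
 * or if the stored count exceeds the caller's buffer. */
static int load_stroke(FILE *f, Pt *pts, int max) {
    int n = 0;
    if (fread(&n, sizeof n, 1, f) != 1 || n < 0 || n > max) return 0;
    return (int)fread(pts, sizeof *pts, (size_t)n, f);
}
```

Saving the frame buffer instead is possible (it's just an image), but then the individual strokes are lost and cannot be replayed or edited, so stroke data is usually the better artifact to persist.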