I'm currently building an application that draws a network of signs to the screen (on a CGContextRef). So far everything is going great, but now I've run into a problem I can't solve:
I'm trying to draw an object dynamically, knowing only the line it sits on (I have the x and y coordinates of the start and end points). From these I found the middle of the line, which is where the symbol should be drawn. With that information I found the angle of the line (with the top as 0). This is the information I have right now:
CGPoint firstLocation;
CGPoint secondLocation;
// Midpoint of the line; this is where the symbol should be drawn.
CGPoint middleLocation = CGPointMake((firstLocation.x + secondLocation.x) / 2,
                                     (firstLocation.y + secondLocation.y) / 2);
double x1 = firstLocation.x;
double y1 = firstLocation.y;
double x2 = middleLocation.x;
double y2 = middleLocation.y;
// Angle of the line in degrees, measured with "up" as 0.
float a = (atan2(y2 - y1, x2 - x1) * (180 / M_PI)) - 90;
I looked at using a transform function (like CGAffineTransform) on a CGRect, but that doesn't seem to work, since I need to rotate the rect around its center and a CGRect would only rotate around its origin.
I want to create the following symbols with the above information:
Any help is appreciated, and if you need any more information please tell me!
In my app I do something similar. I have a path that I add a transform to before drawing. The transform shifts the path to the midpoint, rotates it, and shifts it back:
// Rotate the path so that it points toward the end coordinate.
// Note: CGAffineTransformRotate expects radians, and `a` above is in
// degrees, so convert before rotating.
CGAffineTransform t = CGAffineTransformTranslate(
    CGAffineTransformRotate(
        CGAffineTransformMakeTranslation(middleLocation.x, middleLocation.y),
        -a * M_PI / 180),
    -middleLocation.x, -middleLocation.y);

CGMutablePathRef path = CGPathCreateMutable();
CGPoint points[8] = { ... these are the 8 points in my path ... };
CGPathAddLines(path, &t, points, 8);
You don't have to use CGPathAddLines, that was just the easiest way for me to construct the path. All of the CGPathAdd... functions can take a transform.
If you're not using CGPath, you could apply a similar transform to the context itself with CGContextTranslateCTM and CGContextRotateCTM.
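For instance, a minimal sketch of that context-based approach, assuming context is your CGContextRef and a is the angle in degrees from the question:

// Rotate the context around the midpoint, draw the symbol, then restore.
CGContextSaveGState(context);
CGContextTranslateCTM(context, middleLocation.x, middleLocation.y);
CGContextRotateCTM(context, -a * M_PI / 180); // CTM rotation expects radians
CGContextTranslateCTM(context, -middleLocation.x, -middleLocation.y);
// ... draw the symbol centered on middleLocation here ...
CGContextRestoreGState(context);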
Related
I'm trying to make a SpriteKit game where the player can drag groups of sprites around, but I can't figure out how to get the sprite to follow the mouse cursor. Using the SpriteKit boilerplate, I get this:
Here is the relevant logic for how I move the "Hello, world!" sprite in the SKNode babies:
SKNode *babies;

- (void)mouseDown:(NSEvent *)theEvent {
    dragStart = [theEvent locationInWindow];
    babiesStart = babies.position;
}

- (void)mouseDragged:(NSEvent *)theEvent {
    CGPoint translation = CGPointMake([theEvent locationInWindow].x - dragStart.x,
                                      [theEvent locationInWindow].y - dragStart.y);
    float adjust = 1.0;
    babies.position = CGPointMake(babiesStart.x + translation.x * adjust,
                                  babiesStart.y + translation.y * adjust);
}
I've tried a number of different methods, such as deltaX and deltaY on theEvent, but I get the same result. The only solution I've found is to play with the adjust variable, but that's clearly a hack.
In SpriteKit, NSEvent has another method: - (CGPoint)locationInNode:(SKNode *)node. By using this I was able to get correct offset values for moving the SKNode along with the mouse.
Please try this: [theEvent locationInNode:self];
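A minimal sketch of the drag handlers rewritten around locationInNode: (assuming, as in the question, that self is the scene and dragStart/babiesStart are instance variables):

- (void)mouseDown:(NSEvent *)theEvent {
    // Convert the event location straight into the scene's coordinate space.
    dragStart = [theEvent locationInNode:self];
    babiesStart = babies.position;
}

- (void)mouseDragged:(NSEvent *)theEvent {
    CGPoint location = [theEvent locationInNode:self];
    // Both points are now in the same space, so no fudge factor is needed.
    babies.position = CGPointMake(babiesStart.x + (location.x - dragStart.x),
                                  babiesStart.y + (location.y - dragStart.y));
}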
If you don't accumulate the delta in mouseDragged, some loss is inevitable. In my case, the following works quite well:
var previous = CGPoint.zero
var delta = CGPoint.zero

override func mouseDown(with event: NSEvent) {
    previous = event.location(in: self)
}

override func mouseDragged(with event: NSEvent) {
    let current = event.location(in: self)
    // Accumulate the movement since the last event.
    delta.x += current.x - previous.x
    delta.y += current.y - previous.y
    previous = current
}

override func update(_ currentTime: TimeInterval) {
    // ... use up your delta here ...
    delta = .zero
}
cheers
My guess is that the issue is with coordinate spaces. You're performing calculations based on -[NSEvent locationInWindow], which is, of course, in the window's coordinate system. In what coordinate system is babies.position? It's at least in a view's coordinate system, although SpriteKit may also impose another coordinate space.
To convert the point to the view's coordinate space, you would use NSPoint point = [theView convertPoint:[theEvent locationInWindow] fromView:nil];. To convert the point from the view's coordinate space to the scene's, you'd use CGPoint cgpoint = [theScene convertPointFromView:NSPointToCGPoint(point)];. If babies is not the scene object, then to convert to the coordinate system used by babies.position, you'd do cgpoint = [babies.parent convertPoint:cgpoint fromNode:scene];. You'd then compute the translation by taking the difference between babiesStart and cgpoint.
Update: actually, you wouldn't compare the result with babiesStart as such; you'd compare it with the result of the same coordinate transformation applied to the original cursor location. So you'd compute dragStart the same way you compute cgpoint, and later take the difference between the two.
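Putting that together, a sketch of the full conversion chain inside the drag handler (theView and theScene are stand-ins for the question's actual objects):

- (void)mouseDragged:(NSEvent *)theEvent {
    // Window coordinates -> view coordinates.
    NSPoint viewPoint = [theView convertPoint:[theEvent locationInWindow] fromView:nil];
    // View coordinates -> scene coordinates.
    CGPoint scenePoint = [theScene convertPointFromView:NSPointToCGPoint(viewPoint)];
    // Scene coordinates -> the space babies.position is expressed in.
    CGPoint cgpoint = [babies.parent convertPoint:scenePoint fromNode:theScene];
    // dragStart must have gone through this same chain in mouseDown:.
    babies.position = CGPointMake(babiesStart.x + (cgpoint.x - dragStart.x),
                                  babiesStart.y + (cgpoint.y - dragStart.y));
}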
This is normal behavior.
When you take the mouse position as reported by the event, some time passes before the Sprite Kit view is redrawn, by which point the cursor has already moved to a new position.
There isn't much you can do about it, except perhaps predict the position during very fast movement: factor in the distances between previous mouse events to estimate where the next position is likely to be, then adjust the actual mouse position a little in the most recent general movement direction.
Usually this is overkill, though.
How can I accept touch input beyond the scene's bounds, so that no matter what I set self.position to, touches can still be detected?
I'm creating a tile-based game from Ray Wenderlich's tutorial on Cocos2d version 3.0. I am at the point of setting the view of the screen to a zoomed-in state on my tile map. I have successfully been able to do that, although now my touches are not responding, since I'm outside the coordinate space the touches used to work in.
This method is called to set the zoomed view to the player's position:
- (void)setViewPointCenter:(CGPoint)position {
    CGSize winSize = [CCDirector sharedDirector].viewSizeInPixels;
    int x = MAX(position.x, winSize.width / 2);
    int y = MAX(position.y, winSize.height / 2);
    x = MIN(x, (_tileMap.mapSize.width * _tileMap.tileSize.width) - winSize.width / 2);
    y = MIN(y, (_tileMap.mapSize.height * _tileMap.tileSize.height) - winSize.height / 2);
    CGPoint actualPosition = ccp(x, y);
    CGPoint centerOfView = ccp(winSize.width / 2, winSize.height / 2);
    NSLog(@"centerOfView %@", NSStringFromCGPoint(centerOfView));
    CGPoint viewPoint = ccpSub(centerOfView, actualPosition);
    NSLog(@"viewPoint %@", NSStringFromCGPoint(viewPoint));

    // This changes the position of the helloworld layer/scene so that
    // we can see the portion of the tilemap we're interested in.
    // That, however, makes my touchBegan method stop firing.
    self.position = viewPoint;
}
This is what the NSLog prints from the method:
2014-01-30 07:05:08.725 TestingTouch[593:60b] centerOfView{512, 384}
2014-01-30 07:05:08.727 TestingTouch[593:60b] viewPoint{0, -832}
As you can see, the y coordinate is -832. If I comment out the line self.position = viewPoint, then self.position reads {0, 0} and touches are detectable again, but then we don't have a zoomed view on the character; instead it shows the bottom left of the map.
Here's a video demonstration.
How can I fix this?
Update 1
Here is the github page to my repository.
Update 2
Mark has come up with a temporary solution so far: setting hitAreaExpansion to a large number, like so:
self.hitAreaExpansion = 10000000.0f;
This causes touches to respond again everywhere! However, if there is a solution that does not require setting the property to an arbitrary absolute number, that would be great!
-edit 3- (tl;dr version):
Setting the contentSize of the scene/layer to the size of the tilemap solves this issue:
[self setContentSize: self.tileMap.contentSize];
original replies below:
You would take the touch coordinate and subtract the layer's position. Generally something like:
touchLocation = ccpSub(touchLocation, self.position);
If you were to scale the layer, you would also need an appropriate adjustment for that as well; a short sketch follows.
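Something like this, as a rough sketch (assuming touchLocation starts out in the same space that self.position is expressed in, and using Cocos2d's ccpSub/ccpMult point helpers):

// Compensate for the layer's offset...
CGPoint nodeSpace = ccpSub(touchLocation, self.position);
// ...and, if the layer is scaled, divide that out too.
nodeSpace = ccpMult(nodeSpace, 1.0f / self.scale);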
-edit 1-:
So, I had a chance to take another look, and it looks like my 'ridiculous' number was not ridiculous enough, or I had made another change. Anyway, if you simply add
self.hitAreaExpansion = 10000000.0f; // I'll let you find a more reasonable number
the touches will now get registered.
As for the underlying issue, I believe it to be one of content scale not being set correctly, but again, I'll leave that to you. I did, however, notice while looking through some of the tilemap class that tileSize is said to be in pixels, not points, which I guess is somehow related to this.
-edit 2-:
The sub-optimal answer bugged me, so I looked a little further. Forgive me, I hadn't looked at v3 until I saw this question. :p
After inspecting the base class and observing the scene/layer's value of:
- (BOOL)hitTestWithWorldPos:(CGPoint)pos;
it became obvious that the content size of the scene/layer was being set to the current view size, which in the case of an iPad is (1024, 768).
The position of the layer after the setViewPointCenter call is fully above the initial view's position; hence the touch was being suppressed. By setting the layer/scene contentSize to the size of the tilemap, the touchable area is expanded over the entire map, which allows the node to process the touch.
I am having trouble understanding some of the math in the following tutorial:
Sprite Kit Tutorial
I am not sure how to understand the offset. About halfway through the tutorial, Ray uses the following code:
UITouch *touch = [touches anyObject];
CGPoint location = [touch locationInNode:self];

// 2 - Set up initial location of projectile
SKSpriteNode *projectile = [SKSpriteNode spriteNodeWithImageNamed:@"projectile"];
projectile.position = self.player.position;

// 3 - Determine offset of location to projectile
CGPoint offset = rwSub(location, projectile.position);
where rwSub is
static inline CGPoint rwSub(CGPoint a, CGPoint b) {
    return CGPointMake(a.x - b.x, a.y - b.y);
}
I know this code works, but I don't understand it. I tried NSLogging the touch point and the offset point, and they do not form a triangle like the one shown in the picture:
(source: raywenderlich.com)
This is what I got from my output:
Touch Location
X: 549.000000 Y: 154.000000
Offset
X: 535.500000 Y: -6.000000
This does not form a vector in the correct direction... but it still works?
Is anyone able to explain how the offset works?
The offset is the difference between the point you touched and the ninja's position. So the touch you logged is 535.5 points to the right of the player and 6 points down (-6).
So it is going in the correct direction, relative to the player.
The tutorial also forces the ninja star to travel offscreen, via
// 6 - Get the direction of where to shoot
CGPoint direction = rwNormalize(offset);
// 7 - Make it shoot far enough to be guaranteed off screen
CGPoint shootAmount = rwMult(direction, 1000);
// 8 - Add the shoot amount to the current position
CGPoint realDest = rwAdd(shootAmount, projectile.position);
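To make the logged numbers concrete: since offset = location - projectile.position, the player must be at (549 - 535.5, 154 + 6) = (13.5, 160). A short sketch walking those values through the remaining steps (rwLength, rwNormalize, and rwMult are written out here in the obvious way; the tutorial defines its own versions):

static inline float rwLength(CGPoint a) {
    return sqrtf(a.x * a.x + a.y * a.y);
}
static inline CGPoint rwNormalize(CGPoint a) {
    float length = rwLength(a);
    return CGPointMake(a.x / length, a.y / length);
}
static inline CGPoint rwMult(CGPoint a, float b) {
    return CGPointMake(a.x * b, a.y * b);
}

// offset (535.5, -6): almost straight right, slightly down.
CGPoint offset    = CGPointMake(535.5, -6.0);
CGPoint direction = rwNormalize(offset);     // ~ (0.9999, -0.0112), unit length
CGPoint shoot     = rwMult(direction, 1000); // ~ (999.9, -11.2), well off screen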
Draw some pictures, it will help you understand.
The offset in this case simply represents the location of the touch relative to the character, and lets you know where the projectile will be aimed.
In the tutorial, on the next lines you can see :
// 4 - Bail out if you are shooting down or backwards
if (offset.x <= 0) return;
In this example, offset.x < 0 means that the projectile would be targeting something behind the ninja on the x axis, where 0 is the x coordinate of the character.
The idea here is to translate the projectile's target coordinates into the character's own frame of reference, to better understand their positions relative to each other.
I have multiple UIImageViews within a UIView. These UIImageViews have been transformed.
When a gesture is applied to any one of the UIImageViews, the overlapping image views should be displaced in such a way that they move outside that UIImageView's frame and we get a full view of the image (all of this done in an animated manner).
Also, please note that the images will have been transformed, so I won't be able to use the frame properties.
Right now I'm using CGRectIntersectsRect, but it's not giving me the desired effect.
After struggling a lot, I finally achieved the desired result with this algorithm:
I calculate the list of overlapping UIImageViews and store it in an array, using CGRectIntersectsRect for the check.
Note: this will also include the UIImageView on which the gesture was applied (call it the target image view).
For each of these overlapping UIImageViews, I perform the following steps:
1) Calculate the angle between the overlapping image view and the target image view. This gives me the direction in which to shift the overlapping image view.
2) Now shift the overlapping image view by some constant distance (say len), keeping the same angle, in the direction away from the target image view.
3) You can calculate the new x,y coordinates using the sin/cos functions:
x = start_x + len * cos(angle);
y = start_y + len * sin(angle);
NOTE: for image views whose center is less than your target image view's center, you will need to subtract the value instead.
4) Now shift the overlapping image view to the newly calculated center.
5) Continue shifting until the views no longer intersect.
I am attaching my code. I hope it helps.
- (void)moveImage:(UIImageView *)viewToMove fromView:(UIImageView *)imageToSendBack
{
    CGFloat angle = angleBetweenLines(viewToMove.center, imageToSendBack.center);
    CGFloat extraSpace = 50;
    CGRect oldBounds = viewToMove.bounds;
    CGPoint oldCenter = viewToMove.center;
    // Start from the current center so shiftedCenter is defined even if
    // the views don't intersect and the loop never runs.
    CGPoint shiftedCenter = viewToMove.center;

    // Step away from imageToSendBack until the two frames no longer intersect.
    while (CGRectIntersectsRect(viewToMove.frame, imageToSendBack.frame))
    {
        CGPoint startPoint = viewToMove.center;
        shiftedCenter.x = startPoint.x - (extraSpace * cos(angle));
        if (imageToSendBack.center.y < viewToMove.center.y)
            shiftedCenter.y = startPoint.y + extraSpace * sin(angle);
        else
            shiftedCenter.y = startPoint.y - extraSpace * sin(angle);
        viewToMove.center = shiftedCenter;
    }

    // Restore the original position, then animate to the final center.
    viewToMove.bounds = oldBounds;
    viewToMove.center = oldCenter;
    [self moveImageAnimationBegin:viewToMove toNewCenter:shiftedCenter];
}
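The angleBetweenLines helper isn't shown above; a minimal sketch of what it might look like, assuming it simply returns the atan2 angle of the segment between the two centers:

// Hypothetical helper: angle, in radians, of the line from p1 to p2.
static CGFloat angleBetweenLines(CGPoint p1, CGPoint p2)
{
    return atan2(p2.y - p1.y, p2.x - p1.x);
}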
I didn't try this, but I would do it like this: after getting the BOOL from CGRectIntersectsRect, offset the overlapping UIImageView's x and y by its own width and height.
And since you can't use the frame property after changing the transform, you will need to multiply the width and height by 1.5 to cover the maximum amount of rotation.
Warning: If the transform property is not the identity transform, the value of this property is undefined and therefore should be ignored.
This might move some views farther than others, but you can be certain all views will be moved outside the UIImageView's frame.
If you try it and it doesn't work, post some code and I will help you with it.
EDIT: For views whose x,y are less than your UIImageView's x,y, you will need to subtract the offset instead; a rough sketch follows.
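Something like this (the 1.5 factor is from the suggestion above; imageViews, targetView, and the animation parameters are assumptions):

// Push every view that intersects targetView out of the way, padding by
// 1.5x the size to cover the worst case of a rotated transform.
for (UIImageView *view in imageViews) {
    if (view == targetView || !CGRectIntersectsRect(view.frame, targetView.frame))
        continue;
    CGFloat dx = view.bounds.size.width * 1.5;
    CGFloat dy = view.bounds.size.height * 1.5;
    // Views left of or below the target move the other way.
    if (view.center.x < targetView.center.x) dx = -dx;
    if (view.center.y < targetView.center.y) dy = -dy;
    [UIView animateWithDuration:0.3 animations:^{
        view.center = CGPointMake(view.center.x + dx, view.center.y + dy);
    }];
}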
This has really been a hassle. I have two sprites, both of them 17-pixel-long arms. Both have anchor points at ccp(0.5f, 0.0f), and what I want is for arm2's CGPoint, when arm1 rotates, to be equal to the end of arm1 opposite its anchor point. For example, at a 45-degree angle, the CGPoint would just be ccp(arm1.position.y, arm1.position.x + 17).
So I update the rotation for arm1 in my ccTime function, and it calls another method to do the math for the angle of rotation. Basically what happens is... arm2 rotates really fast around the correct circular area, meaning something is right, but the rotation is just super fast.
- (void)callEveryFrame:(ccTime)dt {
    // If the user is holding down on the screen, arm1 rotates.
    if (held == true) {
        _theArm2.position = [self fixAngle];
        timer = timer + .0166; // this gets reset after touchesEnded
        _theArm1.rotation = _theArm1.rotation - (timer * 10);
    }
}

- (CGPoint)fixAngle {
    CGFloat theAngle = _theArm1.rotation;
    // I'm not too sure how degrees work in cocos2d, so I added 90 to the
    // angle of rotation's original position; it works until the rotation
    // variable changes.
    CGFloat thyAngle = theAngle + 90;
    CGFloat theMathx = (17 * cosf(thyAngle)); // 17 is the length of the arm
    CGFloat theMathy = (17 * sinf(thyAngle));
    theMathx = theMathx + 100; // these 2 updates just change the position,
    theMathy = theMathy + 55;  // because arm1's CGPoint is located at ccp(100,57)
    return CGPointMake(theMathx, theMathy);
}
Sorry if the code is... bad. I'm relatively new to programming, but everything works except the stupid arm2 likes to rotate really fast in a circle.
I will love whoever solves my problem for the rest of my/their lives.
EDIT:
Per the discussion on this thread, it looks like you are using degrees where radians should be used, but not where I thought. Try this:
CGFloat theMathx = (17*cosf(CC_DEGREES_TO_RADIANS(thyAngle))); //17 is the length of the arm
CGFloat theMathy = (17*sinf(CC_DEGREES_TO_RADIANS(thyAngle)));
Try using this to add 90 degrees to the rotation. Cocos2D uses radians instead of degrees:
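// Presumably along these lines (superseded by the corrected version in
// the edit above):
CGFloat thyAngle = theAngle + CC_DEGREES_TO_RADIANS(90);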