More Precise CGPoint for UILongPressGestureRecognizer - objective-c

I am using a UILongPressGestureRecognizer which works perfectly, but the data I get is not precise enough for my use case. The CGPoints I get seem to be rounded off.
Example points that I get: 100.5, 103.0, etc. The decimal part is always .5 or .0. Is there a way to get more precise points? I was hoping for something like .xxxx as in '100.8745', but .xx would do too.
The reason I need this is that I have a circular UIBezierPath and I want to restrict a drag gesture to that circular path: the item should only be draggable along the circumference of the circle. To do this I calculated 720 points on the circle's boundary using its radius. These points are .xxxx numbers. If I round them off, the drag is not as smooth around the middle section of the circle. This is because in the middle section, the equator, the points are very close together on the x-coordinate, so when I rounded off the y-coordinate I lost a lot of points, hence the "not so smooth" drag action.
Here is how I calculate the points:
for (CGFloat i = -154; i < 154; i++) {
    CGPoint point = [self pointAroundCircumferenceFromCenter:center forX:i];
    [bezierPoints addObject:[NSValue valueWithCGPoint:point]];
    i = i - .5; // net step is +0.5 per iteration, since the loop increment adds 1
}
- (CGPoint)pointAroundCircumferenceFromCenter:(CGPoint)center forX:(CGFloat)x
{
    CGFloat radius = 154;
    CGPoint upperPoint = CGPointZero;
    CGPoint lowerPoint = CGPointZero;

    // theta used to be the x variable; I was first calculating points using the angle:
    /* point.x = center.x + radius * cosf(theta);
       point.y = center.y + radius * sinf(theta); */

    CGFloat y = (radius * radius) - (x * x); // was theta*theta, which no longer compiles
    upperPoint.x = x + 156;
    upperPoint.y = 230 - sqrtf(y);
    lowerPoint.x = x + 156;
    lowerPoint.y = sqrtf(y) + 230;
    NSLog(@"x = %f, y = %f", upperPoint.x, upperPoint.y);
    [lowerPoints addObject:[NSValue valueWithCGPoint:lowerPoint]];
    [upperPoints addObject:[NSValue valueWithCGPoint:upperPoint]];
    return upperPoint;
}
I know the code is weird; I mean, why would I add the points to arrays and return one point back?
Here is how I restrict the movement:
-(void)handleLongPress:(UILongPressGestureRecognizer *)recognizer {
    // Note: these are locals, so values set in one gesture state are not
    // available in later calls; they would need to be ivars to persist.
    CGPoint finalpoint;
    CGPoint initialpoint;
    CGFloat y;
    CGFloat x;
    CGPoint tempPoint;

    if (recognizer.state == UIGestureRecognizerStateBegan) {
        initialpoint = [recognizer locationInView:self.view];
        CGRect rect = CGRectMake(initialpoint.x, initialpoint.y, 40, 40);
        self.hourHand.frame = rect;
        self.hourHand.center = initialpoint;
        NSLog(@"Long Press Activated at %f,%f", initialpoint.x, initialpoint.y);
    }
    else if (recognizer.state == UIGestureRecognizerStateChanged) {
        CGPoint currentPoint = [recognizer locationInView:self.view];
        x = currentPoint.x - initialpoint.x;
        y = currentPoint.y - initialpoint.y;
        tempPoint = CGPointMake(currentPoint.x, currentPoint.y);
        NSLog(@"temp point ::%f, %f", tempPoint.x, tempPoint.y);
        tempPoint = [self givePointOnCircleForPoint:tempPoint];
        self.hourHand.center = tempPoint;
    }
    else if (recognizer.state == UIGestureRecognizerStateEnded) {
        // finalpoint = [recognizer locationInView:self.view];
        CGRect rect = CGRectMake(tempPoint.x, tempPoint.y, 20, 20);
        self.hourHand.frame = rect;
        self.hourHand.center = tempPoint;
        NSLog(@"Long Press DeActivated at %f,%f", tempPoint.x, tempPoint.y);
    }
}
-(CGPoint)givePointOnCircleForPoint:(CGPoint)point {
    CGPoint resultingPoint = CGPointZero;
    for (NSValue *pointValue in allPoints) {
        CGPoint pointFromArray = [pointValue CGPointValue];
        if (point.x == pointFromArray.x) {
            // if (point.y > 230.0) {
            resultingPoint = pointFromArray;
            break;
            // }
        }
    }
    return resultingPoint; // the posted method was missing its return and closing brace
}
Basically, I am taking the x-coordinate of the touched point and returning the y by comparing it to the array of points I calculated earlier.
Currently this code works for half a circle only, because each x has two y values (it's a circle); ignore this, because I think it can be easily dealt with.
In the picture, the white circle is the original circle, and the black circle is drawn from the points from the code, rounded to match the precision of the input I get. If you look around the equator (the red highlighted part) you will see a gap between neighbouring points. This gap is my problem.

To answer your original question: On a device with a Retina display, one pixel is 0.5 points, so 0.5 is the best resolution you can get on this hardware.
(On non-Retina devices, 1 pixel == 1 point.)
But it seems to me that you don't need that points array at all. If I understand the problem correctly, you can use the following code to "restrict" (or "project") an arbitrary point onto the circumference of the circle:
CGPoint center = ...;   // Center of the circle
CGFloat radius = ...;   // Radius of the circle
CGPoint point = ...;    // The touched point
CGPoint resultingPoint; // Resulting point on the circumference

// Distance from center to point:
CGFloat dist = hypot(point.x - center.x, point.y - center.y);
if (dist == 0) {
    // The touched point is the circle center.
    // Choose any point on the circumference:
    resultingPoint = CGPointMake(center.x + radius, center.y);
} else {
    // Project point onto the circle's circumference:
    resultingPoint = CGPointMake(center.x + (point.x - center.x) * radius / dist,
                                 center.y + (point.y - center.y) * radius / dist);
}

Related

MKOverlayRenderer gets cut off when rendering MKOverlay but fixed by zooming out

I'm new to iOS development and I'm struggling with porting some code from iOS6 involving the use of MKOverlay.
When the overlay radius or coordinate change, the renderer should update the display accordingly in real time.
This part works, but if I drag the overlay too much, it reaches some boundary and the rendering gets cut off. I can't find any documentation or help on this behavior.
In the CircleOverlayRenderer class:
- (id)initWithOverlay:(id<MKOverlay>)overlay
{
    self = [super initWithOverlay:overlay];
    if (self) {
        CircleZone *bOverlay = (CircleZone *)overlay;
        [RACObserve(bOverlay, coordinate) subscribeNext:^(id x) {
            [self setNeedsDisplay];
        }];
        [RACObserve(bOverlay, radius) subscribeNext:^(id x) {
            [self setNeedsDisplay];
        }];
    }
    return self;
}
- (void)drawMapRect:(MKMapRect)mapRect zoomScale:(MKZoomScale)zoomScale inContext:(CGContextRef)context
{
    CGRect rect = [self rectForMapRect:[self.overlay boundingMapRect]];
    CGContextSaveGState(context);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextSetFillColorSpace(context, colorSpace);
    CGColorSpaceRelease(colorSpace);
    CGContextSetBlendMode(context, kCGBlendModeCopy);
    CGContextSetFillColor(context, color); // 'color' is presumably a CGFloat[] ivar of components
    CGContextSetAllowsAntialiasing(context, YES);
    // outline
    {
        CGContextSetAlpha(context, 0.8);
        CGContextFillEllipseInRect(context, rect);
    }
    // red
    {
        CGContextSetAlpha(context, 0.5);
        CGRect ellipseRect = CGRectInset(rect, 0.01 * rect.size.width / 2, 0.01 * rect.size.height / 2);
        CGContextFillEllipseInRect(context, ellipseRect);
    }
    CGContextRestoreGState(context); // was 'cox', which does not compile
}
In the CircleOverlay class:
- (MKMapRect)boundingMapRect
{
    MKMapPoint center = MKMapPointForCoordinate(self.coordinate);
    double mapPointsPerMeter = MKMapPointsPerMeterAtLatitude(self.coordinate.latitude);
    double mapPointsRadius = _radius * mapPointsPerMeter;
    return MKMapRectMake(center.x - mapPointsRadius, center.y - mapPointsRadius,
                         mapPointsRadius * 2.0, mapPointsRadius * 2.0);
}
Here are some screen shots of the problem I'm seeing:
Problem when dragging overlay too much:
Problem when changing the radius:
The problem does go away if I keep zooming the map out. After the map tiles refresh, the overlay no longer gets cut off...
If anyone had a similar problem, please help me, it's driving me crazy!
Looking at the radius example, it makes me suspect the boundingMapRect, given how it's cropping. Looking at the boundingMapRect implementation, the reliance upon MKMapPointsPerMeterAtLatitude (especially when you're looking at a large region) is worrying. That function is useful if you are, for example, trying to figure out where a coordinate 10 meters from some other coordinate falls, but when looking at really large spans it doesn't always work out well.
I might, instead, suggest something that gets the MKCoordinateRegion of where the circle is, and then converts that to an MKMapRect. A simplistic implementation might look like:
- (MKMapRect)boundingMapRect {
    MKCoordinateRegion region = MKCoordinateRegionMakeWithDistance(self.coordinate, _radius * 2, _radius * 2);
    CLLocationCoordinate2D upperLeftCoordinate = CLLocationCoordinate2DMake(region.center.latitude - region.span.latitudeDelta / 2,
                                                                            region.center.longitude - region.span.longitudeDelta / 2);
    CLLocationCoordinate2D lowerRightCoordinate = CLLocationCoordinate2DMake(region.center.latitude + region.span.latitudeDelta / 2,
                                                                             region.center.longitude + region.span.longitudeDelta / 2);
    MKMapPoint upperLeft = MKMapPointForCoordinate(upperLeftCoordinate);
    MKMapPoint lowerRight = MKMapPointForCoordinate(lowerRightCoordinate);
    return MKMapRectMake(MIN(upperLeft.x, lowerRight.x),
                         MIN(upperLeft.y, lowerRight.y),
                         ABS(upperLeft.x - lowerRight.x),
                         ABS(upperLeft.y - lowerRight.y));
}
You'll have to tweak this to make sure it gracefully handles crossing of the 180th meridian and the case where the circle encompasses the north pole, but it illustrates the basic idea: get the MKCoordinateRegion for the circle and then convert that to an MKMapRect.
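As a crude safeguard on top of that (my addition, not part of the answer), the returned rect can be intersected with MKMapRectWorld so it never extends outside the valid map area. This does not make the wrap-around cases render correctly, but it keeps the rect legal:
// Drop-in replacement for the return statement above:
return MKMapRectIntersection(MKMapRectMake(MIN(upperLeft.x, lowerRight.x),
                                           MIN(upperLeft.y, lowerRight.y),
                                           ABS(upperLeft.x - lowerRight.x),
                                           ABS(upperLeft.y - lowerRight.y)),
                             MKMapRectWorld);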

Warping cursor from local coordinates when using several screens

I'm trying to warp the mouse using NSWindow local coordinates (but I'm starting from local coordinates in px instead of pt, with the y-axis reversed).
-(void)setProperRelativeMouseLocationTo:(NSPoint)loc
{
    CGFloat scale = [[m_window screen] backingScaleFactor];
    NSPoint point = NSMakePoint(loc.x / scale, loc.y / scale);
    point.y = [m_view frame].size.height - point.y;
    NSRect rect = NSZeroRect;
    rect.origin = point;
    rect = [m_window convertRectToScreen:rect];
    point = rect.origin;
    const float screenHeight = [[m_window screen] frame].size.height;
    point.y = screenHeight - point.y;
    warpCursor(point);
}

void warpCursor(NSPoint loc)
{
    CGPoint newCursorPosition = CGPointMake(loc.x, loc.y);
    CGWarpMouseCursorPosition(newCursorPosition);
}
However, the result is unexpected on one of my screens: the x-axis is correct, but the y-axis is off by 280 pt.
This value is not random; it corresponds to the gap between the two screens I'm using: the left one is 1280*800 (pt) and the second one is 1920*1080 (pt) (the left one has a backing scale factor of 2, while the right one has a factor of 1).
On the left screen, the mouse is warped exactly where it should be (if I read its local coordinates, they correspond to the ones I asked it to warp to).
Cocoa screen coordinates have their origin at the lower-left of the primary screen. Core Graphics coordinates have their origin at the top-left of the primary screen. Therefore, you have to use the primary screen's height to convert between the two.
You have:
const float screenHeight = [[m_window screen] frame].size.height;
point.y = screenHeight - point.y;
You need:
const float screenHeight = [[NSScreen screens][0] frame].size.height;
point.y = screenHeight - point.y;
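Putting it together, the method from the question with only that flip corrected might look like this (m_window, m_view, and warpCursor are the asker's names):
-(void)setProperRelativeMouseLocationTo:(NSPoint)loc
{
    CGFloat scale = [[m_window screen] backingScaleFactor];
    NSPoint point = NSMakePoint(loc.x / scale, loc.y / scale);
    point.y = [m_view frame].size.height - point.y; // flip into Cocoa view coordinates

    NSRect rect = NSZeroRect;
    rect.origin = point;
    rect = [m_window convertRectToScreen:rect];
    point = rect.origin;

    // Flip using the PRIMARY screen's height, because Core Graphics
    // measures from the top-left of the primary screen:
    const CGFloat screenHeight = [[NSScreen screens][0] frame].size.height;
    point.y = screenHeight - point.y;
    warpCursor(point);
}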

Radian Angle Check in Box2d Objective-C

I am creating a car game in Objective-C with Box2D. I want to rotate the car's wheels according to the rotation of a CCSprite (the steering wheel).
For this, I am able to set the angle of the steering CCSprite according to the touch, and the steering wheel rotates perfectly. But when I try to read the value of the steering rotation, the radian angle value is sometimes wrong, so I am not able to rotate the wheels to match the steering rotation.
My code to set the steering angle is below.
On touch begin, setting the starting angle:
-(void)getAngleSteer:(UITouch *)touch
{
    CGPoint location = [touch locationInView:[touch view]];
    location = [[CCDirector sharedDirector] convertToGL:location];
    float adjacent = steering.position.x - location.x;
    float opposite = steering.position.y - location.y;
    self.startAngleForSteer = atan2f(adjacent, opposite);
}
On touch move, rotating the steering wheel and setting the car angle:
-(void)rotateSteer:(UITouch *)touch
{
    CGPoint location = [touch locationInView:[touch view]];
    location = [[CCDirector sharedDirector] convertToGL:location];
    float adjacent = steering.position.x - location.x;
    float opposite = steering.position.y - location.y;
    float angle = atan2f(adjacent, opposite);
    angle = angle - self.startAngleForSteer;
    steering.rotation = CC_RADIANS_TO_DEGREES(angle);
    // Main issue is in the line below
    NSLog(@"%f %f", angle, CC_RADIANS_TO_DEGREES(angle));
    if (angle > M_PI/3 || angle < -M_PI/3) return;
    steering_angle = -angle; // This is the Box2D car angle
}
This is the log when moving the steering wheel anticlockwise:
-0.127680 -7.315529
-0.212759 -12.190166
-0.329367 -18.871363
5.807306 332.734131 // Why does 5.80 come just after -0.32? It should keep decreasing.
5.721369 327.810303
5.665644 324.617462
Finally got the answer for this, thanks to the Box2D forum (http://box2d.org/forum/viewtopic.php?f=5&t=4726).
For Objective-C we have to normalize the angle with a function like the one below.
-(float)normalizeAngle:(float)angle
{
    CGFloat result = (float)((int)angle % (int)(2 * M_PI));
    return (result >= 0) ? ((angle < M_PI) ? angle : angle - (2 * M_PI))
                         : ((angle >= -M_PI) ? angle : angle + (2 * M_PI));
}
And just call it before the M_PI/3 condition check:
steering.rotation = CC_RADIANS_TO_DEGREES(angle);
angle = [self normalizeAngle:angle]; // This is where to call it.
if (angle > M_PI/3 || angle < -M_PI/3) return;
steering_angle = -angle;
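Note that the (int) casts in that helper truncate ((int)(2*M_PI) is just 6), so the modulo is only a rough sign test. A variant based on fmodf (my suggestion, not from the linked thread) avoids the truncation and always returns a value in (-M_PI, M_PI]:
// Wrap an angle in radians into (-M_PI, M_PI] without truncating to int.
-(float)normalizeAngle:(float)angle
{
    float result = fmodf(angle, 2.0f * (float)M_PI); // now in (-2*pi, 2*pi)
    if (result > M_PI)   result -= 2.0f * (float)M_PI;
    if (result <= -M_PI) result += 2.0f * (float)M_PI;
    return result;
}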

Detect closer point inside a view from touch

Is there a way, when the user touches outside a view, for the app to detect the closest point inside that view? I want to detect it just like in the image below.
EDIT:
CGPoint touchPoint = [[touches anyObject] locationInView:self.view];
if (CGRectContainsPoint([_grayView frame], touchPoint)) {
    // The touch was inside the gray view.
} else {
    // The touch was outside the view. Detect the closest CGPoint inside the gray view.
    // The detection must be relative to the whole view (in the image example,
    // the returned CGPoint would be relative to the whole screen).
}
static float bound(float pt, float min, float max)
{
    if (pt < min) return min;
    if (pt > max) return max;
    return pt;
}

static CGPoint boundPoint(CGPoint touch, CGRect bounds)
{
    touch.x = bound(touch.x, bounds.origin.x, bounds.origin.x + bounds.size.width);  // missing ')' added
    touch.y = bound(touch.y, bounds.origin.y, bounds.origin.y + bounds.size.height); // missing ')' added
    return touch;
}
All you need is a little math:
Ask the touch for its locationInView: with the view in question as the argument.
Compare the point's x and y with the view's bounds, clamping to the extrema of that CGRect.
There is no step three; the result of the above is the point you are looking for.
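Combined with the boundPoint helper above, the else branch of the question's snippet reduces to something like this (assuming _grayView and the touch share self.view's coordinate space, as in the question):
CGPoint touchPoint = [[touches anyObject] locationInView:self.view];
if (!CGRectContainsPoint([_grayView frame], touchPoint)) {
    // Clamp the outside touch to the nearest point of the gray view's frame:
    CGPoint closest = boundPoint(touchPoint, [_grayView frame]);
    NSLog(@"Closest point inside the gray view: %f, %f", closest.x, closest.y);
}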
Try this code!
CGPoint pt = [[touches anyObject] locationInView:childView]; // 'touches' is an NSSet, so ask one touch
if (pt.x >= 0 && pt.x <= childView.frame.size.width
    && pt.y >= 0 && pt.y <= childView.frame.size.height) {
    NSLog(@"Touch inside rect");
    return;
}
pt.x = MIN(childView.frame.size.width, MAX(0, pt.x));
pt.y = MIN(childView.frame.size.height, MAX(0, pt.y));
// and here is the point
NSLog(@"The closest point is %f, %f", pt.x, pt.y);

Zoom Layer centered on a Sprite

I am in the process of developing a small game where a spaceship travels through a layer (doh!). In some situations the spaceship comes close to an enemy, and the whole layer is zoomed in on the spaceship, with the zoom level depending on the distance between the ship and the enemy. All of this works fine.
The main question, however, is: how do I keep the zoom centered on the spaceship?
Currently I control the zooming in the GameLayer object through the update method; here is the code:
-(void)prepareLayerZoomBetweenSpaceship {
    CGPoint mainSpaceShipPosition = [mainSpaceShip position];
    CGPoint enemySpaceShipPosition = [enemySpaceShip position];
    float distance = powf(mainSpaceShipPosition.x - enemySpaceShipPosition.x, 2) +
                     powf(mainSpaceShipPosition.y - enemySpaceShipPosition.y, 2);
    distance = sqrtf(distance);
    /*
     Distance > 250 --> no zoom
     Distance < 100 --> maximum zoom
     */
    float myZoomLevel = 0.5f;
    if (distance < 100) { // maximum zoom in
        myZoomLevel = 1.0f;
    } else if (distance > 250) {
        myZoomLevel = 0.5f;
    } else {
        myZoomLevel = 1.0f - (distance - 100) * 0.0033f;
    }
    [self zoomTo:myZoomLevel];
}

-(void)zoomTo:(float)zoom {
    if (zoom > 1) {
        zoom = 1;
    }
    // Set the scale.
    if (self.scale != zoom) {
        self.scale = zoom;
    }
}
Basically my question is: How do I zoom the layer and center it exactly between the two ships? I guess this is like a pinch zoom with two fingers!
Below is some code that should get it working for you. Basically you want to:
Update your ship positions within the parentNode's coordinate system
Figure out which axis these new positions will cause the screen to be bound by.
Scale and re-position the parentNode
I added some sparse comments, but let me know if you have any more questions/issues. It might be easiest to dump this in a test project first...
ivars to put in your CCLayer:
CCNode *parentNode;
CCSprite *shipA;
CCSprite *shipB;
CGPoint destA, deltaA;
CGPoint destB, deltaB;
CGPoint halfScreenSize;
CGPoint fullScreenSize;
init stuff to put in your CCLayer:
CGSize size = [[CCDirector sharedDirector] winSize];
fullScreenSize = CGPointMake(size.width, size.height);
halfScreenSize = ccpMult(fullScreenSize, .5f);

parentNode = [CCNode node];
[self addChild:parentNode];

shipA = [CCSprite spriteWithFile:@"Icon-Small.png"]; // or whatever sprite
[parentNode addChild:shipA];

shipB = [CCSprite spriteWithFile:@"Icon-Small.png"];
[parentNode addChild:shipB];

// schedules update for every frame... might not run great.
//[self schedule:@selector(updateShips:)];

// schedules update 25 times a second
[self schedule:@selector(updateShips:) interval:0.04f];
Zoom / Center / Ship update method:
-(void)updateShips:(ccTime)timeDelta {
    // SHIP POSITION UPDATE STUFF GOES HERE
    ...

    // 1st: calc aspect ratio formed by ship positions to determine bounding axis
    float shipDeltaX = fabs(shipA.position.x - shipB.position.x);
    float shipDeltaY = fabs(shipA.position.y - shipB.position.y);
    float newAspect = shipDeltaX / shipDeltaY;

    // Then: scale based off of the bounding axis.
    // If bound by the x-axis OR deltaY is negligible:
    if (newAspect > (fullScreenSize.x / fullScreenSize.y) || shipDeltaY < 1.0) {
        parentNode.scale = fullScreenSize.x / (shipDeltaX + shipA.contentSize.width);
    }
    else { // else: bound by the y-axis or deltaX is negligible
        parentNode.scale = fullScreenSize.y / (shipDeltaY + shipA.contentSize.height);
    }

    // calculate the new midpoint between the ships AND apply the new scale to it
    CGPoint scaledMidpoint = ccpMult(ccpMidpoint(shipA.position, shipB.position), parentNode.scale);

    // update the parent node position (move it into view of the screen) to scaledMidpoint
    parentNode.position = ccpSub(halfScreenSize, scaledMidpoint);
}
Also, I'm not sure how well it'll perform with a bunch of stuff going on -- but that's a separate problem!
Why don't you move the entire view and position it so the ship is in the centre of the screen? I haven't tried it with your example, but it should be straightforward. Maybe something like this:
CGFloat x = (enemySpaceShipPosition.x - mainSpaceShipPosition.x) / 2.0 - screenCentreX;
CGFloat y = (enemySpaceShipPosition.y - mainSpaceShipPosition.y) / 2.0 - screenCentreY;
CGPoint midPointForContentOffset = CGPointMake(-x, -y);
[self setContentOffset:midPointForContentOffset];
...where you've already set up screenCentreX and screenCentreY. I haven't used UIScrollView for quite a while (I've been working on something in Unity, so I'm forgetting all my Obj-C), and I can't remember how the contentOffset is affected by zoom level. Try it and see! (I'm assuming you're using a UIScrollView; maybe you could try that too if you're not.)
...where you've already set up screenCentreX & Y. I haven't used UISCrollView for quite a while (been working on something in Unity so I'm forgetting all by Obj-C), & I can't remember how the contentOffset is affected by zoom level. Try it & see! (I'm assuming you're using a UIScrollView, maybe you could try that too if you're not)