Warping cursor from local coordinates when using several screens - objective-c

I'm trying to warp the mouse cursor using NSWindow-local coordinates (but my input is in pixels instead of points, with the y-axis reversed).
-(void)setProperRelativeMouseLocationTo:(NSPoint)loc
{
    // Convert from pixels to points on the window's screen.
    CGFloat scale = [[m_window screen] backingScaleFactor];
    NSPoint point = NSMakePoint(loc.x / scale, loc.y / scale);

    // Flip y so it matches the view's bottom-left origin.
    point.y = [m_view frame].size.height - point.y;

    // Convert from window coordinates to Cocoa screen coordinates.
    NSRect rect = NSZeroRect;
    rect.origin = point;
    rect = [m_window convertRectToScreen:rect];
    point = rect.origin;

    // Flip y again to get Core Graphics (top-left origin) coordinates.
    const float screenHeight = [[m_window screen] frame].size.height;
    point.y = screenHeight - point.y;

    warpCursor(point);
}

void warpCursor(NSPoint loc)
{
    CGPoint newCursorPosition = CGPointMake(loc.x, loc.y);
    CGWarpMouseCursorPosition(newCursorPosition);
}
However, the result is unexpected on one of my screens: the x-axis is correct, but the y-axis is off by 280 pt.
This value is not random: it matches the gap between the two screens I'm using. The left one is 1280×800 (pt) with a backing scale factor of 2, and the right one is 1920×1080 (pt) with a factor of 1.
On the left screen, the mouse is warped exactly where it should be (if I read back its local coordinates, they match the ones I asked it to warp to).

Cocoa screen coordinates have their origin at the lower-left of the primary screen. Core Graphics coordinates have their origin at the top-left of the primary screen. Therefore, you have to use the primary screen's height to convert between the two.
You have:
const float screenHeight = [[m_window screen] frame].size.height;
point.y = screenHeight - point.y;
You need:
const float screenHeight = [[NSScreen screens][0] frame].size.height;
point.y = screenHeight - point.y;
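As a sketch, the same flip written as a standalone helper (the function name is made up, not part of the question's code):
static NSPoint cocoaScreenPointToCGGlobal(NSPoint cocoaPoint)
{
    // Convert from Cocoa screen coordinates (origin at the bottom-left of the
    // primary screen) to Core Graphics global coordinates (origin at the
    // top-left of the primary screen).
    // The primary screen is always the first entry in +[NSScreen screens].
    const CGFloat primaryHeight = [[NSScreen screens][0] frame].size.height;
    return NSMakePoint(cocoaPoint.x, primaryHeight - cocoaPoint.y);
}
With that, the last two lines of the method before warpCursor(point) collapse to point = cocoaScreenPointToCGGlobal(point);.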

Related

Drawing a Speedometer with Core Graphics on OSX in NSView

I'm trying to draw elements of a speed gauge using Core Graphics on OS X. I've almost got it, but need a little help with the center ticks inside the gauge. Here is the image of what I'm trying to do:
Here is an image of what I've got so far:
I know how to draw the circle rings and how to draw segments based around the center of the gauge like this:
- (void)drawOuterGaugeRingsInRect:(CGContextRef)contextRef rect:(NSRect)rect {
    CGContextSetLineWidth(contextRef, self.gaugeRingWidth);
    CGContextSetStrokeColorWithColor(contextRef, [MyColors SpeedGaugeOuterRingGray].CGColor);
    CGFloat startRadians = 0;
    CGFloat endRadians = M_PI * 2;
    CGFloat radius = self.bounds.size.width / 2 - 5;
    CGContextAddArc(contextRef, CGRectGetMidX(rect), CGRectGetMidY(rect), radius, startRadians, endRadians, YES);

    // Render the outer gauge
    CGContextStrokePath(contextRef);

    // Draw the inner gauge ring.
    radius -= self.gaugeRingWidth;
    CGContextSetStrokeColorWithColor(contextRef, [MyColors SpeedGaugeInnerRingGray].CGColor);
    CGContextAddArc(contextRef, CGRectGetMidX(rect), CGRectGetMidY(rect), radius, startRadians, endRadians, YES);

    // Render the inner gauge
    CGContextStrokePath(contextRef);
    radius -= self.gaugeRingWidth;

    // Draw and fill the gauge background
    CGContextSetFillColorWithColor(contextRef, [MyColors SpeedGaugeCenterFillBlack].CGColor);
    CGContextSetStrokeColorWithColor(contextRef, [MyColors SpeedGaugeCenterFillBlack].CGColor);
    CGContextAddArc(contextRef, CGRectGetMidX(rect), CGRectGetMidY(rect), radius, startRadians, endRadians, YES);

    // Render and fill the gauge background
    CGContextDrawPath(contextRef, kCGPathFillStroke);

    /* BLUE CIRCULAR DIAL */
    // Prepare to draw the blue circular dial.
    radius -= self.gaugeRingWidth / 2;

    // Adjust gauge ring width
    CGContextSetLineWidth(contextRef, self.gaugeRingWidth / 2);
    CGContextSetStrokeColorWithColor(contextRef, [MyColors SpeedGaugeBlue].CGColor);
    CGFloat startingRadians = [MyMathHelper degressToRadians:135];
    CGFloat endingRadians = [MyMathHelper degressToRadians:45];
    CGContextAddArc(contextRef, CGRectGetMidX(rect), CGRectGetMidY(rect), radius, startingRadians, endingRadians, NO);

    // Render the blue gauge line
    CGContextStrokePath(contextRef);
}
The code above is called in the drawRect: method in my NSView
The key section is the code here:
- (void)drawInnerDividerLines:(CGContextRef)context rect:(NSRect)rect {
    CGFloat centerX = CGRectGetMidX(rect);
    CGFloat centerY = CGRectGetMidY(rect);
    CGContextSetLineWidth(context, 3.0);
    CGContextSetRGBStrokeColor(context, 37.0/255.0, 204.0/255.0, 227.0/255.0, 0.5);
    CGFloat destinationX = centerX + (centerY * (cos((135) * (M_PI / 180))));
    CGFloat destinationY = centerY + (centerX * (sin((135) * (M_PI / 180))));
    NSPoint destinationPoint = NSMakePoint(destinationX, destinationY);
    CGContextMoveToPoint(context, centerX, centerY);
    CGContextAddLineToPoint(context, destinationPoint.x, destinationPoint.y);
    CGContextStrokePath(context);
}
I understand what is going on here, but the problem I'm trying to solve is drawing the little tick lines off the inner blue line: they should extend toward the center of the view but not be drawn all the way to the center. I'm a little unsure how to modify the math and drawing logic to achieve this. Here is the unit circle I based the angles on for the Core Graphics drawing.
The main problems I'm trying to solve are:
How to define the proper starting point on the light blue inner line for each gauge tick. Right now, I'm drawing the full line from the center to the edge of the gauge.
How to control the length of each gauge tick as it's drawn pointing toward the center from its origin point on the blue line.
Any tips or advice that would point me in the right direction to solve this would be appreciated.
I recommend using vectors. You can find a line to any point on the circle given an angle by calculating:
dirX = cos(angle);
dirY = sin(angle);
startPt.x = center.x + innerRadius * dirX;
startPt.y = center.y + innerRadius * dirY;
endPt.x = center.x + outerRadius * dirX;
endPt.y = center.y + outerRadius * dirY;
You can then plot a line between startPt and endPt.
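To make that concrete in the same style as the posted drawing methods, here is a minimal Objective-C sketch. The tick count, tick length, sweep direction, and the radius arithmetic (which mirrors the subtractions in drawOuterGaugeRingsInRect) are assumptions; adjust them to match your gauge.
- (void)drawGaugeTicks:(CGContextRef)contextRef rect:(NSRect)rect {
    CGPoint center = CGPointMake(CGRectGetMidX(rect), CGRectGetMidY(rect));

    // Radius of the blue dial line, following the same subtractions as the ring code.
    CGFloat outerRadius = self.bounds.size.width / 2 - 5 - 2.5 * self.gaugeRingWidth;
    CGFloat tickLength  = 10.0;                   // how far each tick extends inward
    CGFloat innerRadius = outerRadius - tickLength;

    CGContextSetLineWidth(contextRef, 2.0);
    CGContextSetRGBStrokeColor(contextRef, 37.0/255.0, 204.0/255.0, 227.0/255.0, 1.0);

    int tickCount = 12;
    CGFloat startRadians = 135.0 * M_PI / 180.0;  // same endpoints as the blue arc
    CGFloat sweepRadians = 270.0 * M_PI / 180.0;  // flip the sign if it sweeps the wrong way

    for (int i = 0; i <= tickCount; i++) {
        CGFloat angle = startRadians + sweepRadians * i / tickCount;
        CGFloat dirX = cos(angle);
        CGFloat dirY = sin(angle);

        // Each tick runs from the inner radius out to the blue line along the same direction.
        CGPoint startPt = CGPointMake(center.x + innerRadius * dirX,
                                      center.y + innerRadius * dirY);
        CGPoint endPt   = CGPointMake(center.x + outerRadius * dirX,
                                      center.y + outerRadius * dirY);

        CGContextMoveToPoint(contextRef, startPt.x, startPt.y);
        CGContextAddLineToPoint(contextRef, endPt.x, endPt.y);
    }
    CGContextStrokePath(contextRef);
}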
Any tips or advice that would point me in the right direction to solve this would be appreciated.
Given a point on the circumference of your circle at a certain angle around the centre, you can form a right-angled triangle: the radius is the hypotenuse, and the other two sides are parallel to the x and y axes (ignore for a moment the degenerate cases where the point is at 0, 90, 180 or 270 deg). With the sin and cos formulas (remember SOHCAHTOA from school) and some basic math you can calculate the coordinates of the point, and using that draw a radius from the centre to the point.
The end points of a "tick" mark just lie on circles of different radii, so the same math will give you the end points and you can draw the tick. You just need to decide the radii of these circles, i.e. how far along your original radius the end points of the tick should be.
HTH
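As a tiny sketch of that formula (the helper name is made up):
// Point on a circle of radius r around `center`, at `angleDegrees` (0 deg = +x axis).
static CGPoint PointOnCircle(CGPoint center, CGFloat r, CGFloat angleDegrees)
{
    CGFloat radians = angleDegrees * M_PI / 180.0;
    return CGPointMake(center.x + r * cos(radians),
                       center.y + r * sin(radians));
}

// A tick from radius r1 to radius r2 along the same angle:
// CGPoint tickStart = PointOnCircle(center, r1, angle);
// CGPoint tickEnd   = PointOnCircle(center, r2, angle);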
Another approach to avoid the trigonometry is to rotate the transformation matrix and just draw a vertical or horizontal line.
// A vertical "line" of width "width" along the y axis at x==0.
NSRect tick = NSMakeRect(-width / 2.0, innerRadius, width, outerRadius - innerRadius);
NSAffineTransform* xform = [NSAffineTransform transform];
// Move the x and y axes to the center of your speedometer so rotation happens around it
[xform translateXBy:center.x yBy:center.y];
// Rotate the coordinate system so that straight up is actually at your speedometer's 0
[xform rotateByDegrees:135];
[xform concat];
// Create a new transform to rotate back around for each tick.
xform = [NSAffineTransform transform];
[xform rotateByDegrees:-270.0 / numTicks];
for (int i = 0; i < numTicks; i++)
{
NSRectFill(tick);
[xform concat];
}
You probably want to wrap this in [NSGraphicsContext saveGraphicsState] and [NSGraphicsContext restoreGraphicsState] so the transformation matrix is restored when you're done.
If you want two different kinds of tick marks (major and minor), then have two different rects and select one based on i % 10 == 0 or whatever. Maybe also toggle the color. Etc.
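For example (a sketch only; the two widths and colors are made-up values, and it assumes the same innerRadius, outerRadius, numTicks, and xform setup as above):
// Two tick rects: a longer/wider one for major ticks, a shorter one for minor ticks.
CGFloat majorWidth = 3.0;
CGFloat minorWidth = 1.5;
NSRect majorTick = NSMakeRect(-majorWidth / 2.0, innerRadius, majorWidth,
                              outerRadius - innerRadius);
NSRect minorTick = NSMakeRect(-minorWidth / 2.0, innerRadius + 4.0, minorWidth,
                              outerRadius - innerRadius - 4.0);

for (int i = 0; i < numTicks; i++)
{
    if (i % 10 == 0) {
        [[NSColor whiteColor] setFill];
        NSRectFill(majorTick);
    } else {
        [[NSColor lightGrayColor] setFill];
        NSRectFill(minorTick);
    }
    [xform concat];   // same per-tick rotation as in the loop above
}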

More Precise CGPoint for UILongPressGestureRecognizer

I am using a UILongPressGestureRecognizer, which works perfectly, but the data I get is not precise enough for my use case. The CGPoints I get seem to be rounded off.
Example points that I get: 100.5, 103.0, etc. The decimal part is either .5 or .0. Is there a way to get more precise points? I was hoping for something like .xxxx as in '100.8745', but .xx would do too.
The reason I need this is that I have a circular UIBezierPath and I want to restrict a drag gesture to only that circular path. The item should only be draggable along the circumference of this circle. To do this, I calculated 720 points on the circle's boundary using its radius. Now these points are .xxxx numbers. If I round them off, the drag is not as smooth around the middle section of the circle. This is because in the middle section, the equator, the points on the x-coordinate are very close together. So when I rounded off the y-coordinate, I lost a lot of points and hence the "not so smooth" drag action.
Here is how I calculate the points
for (CGFloat i = -154; i < 154; i++) {
    CGPoint point = [self pointAroundCircumferenceFromCenter:center forX:i];
    [bezierPoints addObject:[NSValue valueWithCGPoint:point]];
    i = i - .5;
}
- (CGPoint)pointAroundCircumferenceFromCenter:(CGPoint)center forX:(CGFloat)x
{
    CGFloat radius = 154;
    CGPoint upperPoint = CGPointZero;
    CGPoint lowerPoint = CGPointZero;

    // theta used to be the x variable; I was first calculating points using the angle:
    /* point.x = center.x + radius * cosf(theta);
       point.y = center.y + radius * sinf(theta); */

    CGFloat y = (radius * radius) - (x * x);
    upperPoint.x = x + 156;
    upperPoint.y = 230 - sqrtf(y);
    lowerPoint.x = x + 156;
    lowerPoint.y = sqrtf(y) + 230;

    NSLog(@"x = %f, y = %f", upperPoint.x, upperPoint.y);

    [lowerPoints addObject:[NSValue valueWithCGPoint:lowerPoint]];
    [upperPoints addObject:[NSValue valueWithCGPoint:upperPoint]];
    return upperPoint;
}
I know the code is weird; I mean, why would I add the points to arrays and also return one point back?
Here is how I restrict the movement
-(void)handleLongPress:(UILongPressGestureRecognizer *)recognizer{
    CGPoint finalpoint;
    CGPoint initialpoint;
    CGFloat y;
    CGFloat x;
    CGPoint tempPoint;

    if (recognizer.state == UIGestureRecognizerStateBegan){
        initialpoint = [recognizer locationInView:self.view];
        CGRect rect = CGRectMake(initialpoint.x, initialpoint.y, 40, 40);
        self.hourHand.frame = rect;
        self.hourHand.center = initialpoint;
        NSLog(@"Long Press Activated at %f,%f", initialpoint.x, initialpoint.y);
    }
    else if (recognizer.state == UIGestureRecognizerStateChanged){
        CGPoint currentPoint = [recognizer locationInView:self.view];
        x = currentPoint.x - initialpoint.x;
        y = currentPoint.y - initialpoint.y;
        tempPoint = CGPointMake(currentPoint.x, currentPoint.y);
        NSLog(@"temp point ::%f, %f", tempPoint.x, tempPoint.y);
        tempPoint = [self givePointOnCircleForPoint:tempPoint];
        self.hourHand.center = tempPoint;
    }
    else if (recognizer.state == UIGestureRecognizerStateEnded){
        // finalpoint = [recognizer locationInView:self.view];
        CGRect rect = CGRectMake(tempPoint.x, tempPoint.y, 20, 20);
        self.hourHand.frame = rect;
        self.hourHand.center = tempPoint;
        NSLog(@"Long Press DeActivated at %f,%f", tempPoint.x, tempPoint.y);
    }
}

-(CGPoint)givePointOnCircleForPoint:(CGPoint)point{
    CGPoint resultingPoint;
    for (NSValue *pointValue in allPoints){
        CGPoint pointFromArray = [pointValue CGPointValue];
        if (point.x == pointFromArray.x) {
            // if (point.y > 230.0){
            resultingPoint = pointFromArray;
            break;
            // }
        }
    }
    return resultingPoint;
}
Basically, I'm taking the x-coordinate of the touched point and returning the y-coordinate by comparing it to the array of points I calculated earlier.
Currently this code only works for half the circle, because each x has two y values (it's a circle). Ignore this; I think it can easily be dealt with.
In the picture, the white circle is the original circle, and the black circle is made of the points I get from the code after rounding them off to match the precision of the input. If you look around the equator (the red highlighted part), you will see a gap between neighboring points. This gap is my problem.
To answer your original question: On a device with a Retina display, one pixel is 0.5 points, so 0.5 is the best resolution you can get on this hardware.
(On non-Retina devices, 1 pixel == 1 point.)
But it seems to me that you don't need that points array at all. If I understand the problem correctly, you can use the following code to
"restrict" (or "project") an arbitrary point onto the circumference of the circle:
CGPoint center = ...;   // Center of the circle
CGFloat radius = ...;   // Radius of the circle
CGPoint point = ...;    // The touched point
CGPoint resultingPoint; // Resulting point on the circumference

// Distance from center to point:
CGFloat dist = hypot(point.x - center.x, point.y - center.y);
if (dist == 0) {
    // The touched point is the circle center.
    // Choose any point on the circumference:
    resultingPoint = CGPointMake(center.x + radius, center.y);
} else {
    // Project point to circle circumference:
    resultingPoint = CGPointMake(center.x + (point.x - center.x) * radius / dist,
                                 center.y + (point.y - center.y) * radius / dist);
}
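A sketch of how this could slot into the gesture handler from the question; self.circleCenter and self.circleRadius are hypothetical properties standing in for wherever the circle is defined:
else if (recognizer.state == UIGestureRecognizerStateChanged) {
    CGPoint currentPoint = [recognizer locationInView:self.view];

    // Project the touch onto the circle instead of looking it up in a point table.
    CGFloat dist = hypot(currentPoint.x - self.circleCenter.x,
                         currentPoint.y - self.circleCenter.y);
    CGPoint onCircle;
    if (dist == 0) {
        // Touch is exactly at the center; pick any point on the circumference.
        onCircle = CGPointMake(self.circleCenter.x + self.circleRadius, self.circleCenter.y);
    } else {
        onCircle = CGPointMake(self.circleCenter.x + (currentPoint.x - self.circleCenter.x) * self.circleRadius / dist,
                               self.circleCenter.y + (currentPoint.y - self.circleCenter.y) * self.circleRadius / dist);
    }
    self.hourHand.center = onCircle;
}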

CGRect changes width when device rotated

I want to draw an XoY coordinate axis... The axes are CGRects... the thing is, they change width when the device is rotated... I would like them to maintain their width in all rotations. Does a width of 5.0 mean different things when the device is in portrait rather than landscape?
Here is the code:
CGContextSaveGState(ctx);
CGContextSetFillColorWithColor(ctx, [[UIColor whiteColor] CGColor]);
// the axis is a rect ...
//axis start point
CGFloat axisStartX = viewBounds.size.width * LEFT_EXCLUSION_LENGTH_PERCENT;
CGFloat axisStartY = viewBounds.size.height * UNDER_EXCLUSION_LENGTH_PERCENT;
CGFloat axisLength = viewBounds.size.height - (viewBounds.size.height * OVER_EXCLUSION_LENGTH_PERCENT) - viewBounds.size.height * UNDER_EXCLUSION_LENGTH_PERCENT;
CGContextAddRect(ctx, CGRectMake(axisStartX, axisStartY, AXIS_LINE_WIDTH, axisLength));
CGContextFillPath(ctx);
CGContextRestoreGState(ctx);
I can't tell exactly what you are up to with this calculation, but viewBounds.size.height may change on rotation, depending on what it represents. If its height varies between orientations, then axisLength will vary too, and axisLength is the height of your rect.
If you want your rect to be the same size in every orientation, then do this:
CGContextSaveGState(ctx);
CGContextSetFillColorWithColor(ctx, [[UIColor whiteColor] CGColor]);
// the axis is a rect ...
//axis start point
CGFloat boxSize = MIN(viewBounds.size.width, viewBounds.size.height);
CGFloat axisStartX = boxSize * LEFT_EXCLUSION_LENGTH_PERCENT;
CGFloat axisStartY = boxSize * UNDER_EXCLUSION_LENGTH_PERCENT;
CGFloat axisLength = boxSize - (boxSize * OVER_EXCLUSION_LENGTH_PERCENT) - boxSize * UNDER_EXCLUSION_LENGTH_PERCENT;
CGContextAddRect(ctx, CGRectMake(axisStartX, axisStartY, AXIS_LINE_WIDTH, axisLength));
CGContextFillPath(ctx);
CGContextRestoreGState(ctx);
The idea here is that whatever the orientation is, the size remains the same and stays visible on screen.
Hope that helps!

iOS CATransform3D Coordinates

Would really appreciate any help on this one. I have applied a 3D transformation to a view and need to identify the edge coordinates of the rendered view so I can present another view adjacent to it (without any pixel gap). Specifically, I want a series of views ("pages") to fold up like a leaflet by animating the angle.
int dir = (isOddNumberedPage ? 1 : -1);
float angle = 10.0;
theView.frame = CGRectMake(pageNumber * 320, 0, 320, 460);
CATransform3D rotationAndPerspectiveTransform = CATransform3DIdentity;
rotationAndPerspectiveTransform.m34 = -1.0 / 2000; // Perspective
rotationAndPerspectiveTransform = CATransform3DRotate(rotationAndPerspectiveTransform,
dir * angle / (180.0 / M_PI), 0.0f, 1.0f, 0.0f);
theView.layer.transform = rotationAndPerspectiveTransform;
// Now need to get the top, left, width, height of the transformed view to correct the view's left offset
I have tried a number of ways of doing this, from inspecting the CALayer to a failed attempt at using some matrix-maths code snippets I found, but I have not been able to crack it or even get close (depending on the angle, it's a good 20 pixels out). Is there a way I can do this without spending two weeks reading a matrix maths textbook?
The frame of a view is an axis-aligned rectangle in the superview's coordinate system. The frame fully encloses the view's bounds. If the view is transformed, the frame adjusts to tightly enclose the view's new bounds.
When you apply a Y-axis rotation and perspective to a view, the left and right edges of the view move toward its anchor point (which is normally the center of the view). The left edge also grows either taller or shorter, and the right edge does the opposite.
So the frame of the view (after applying the transformation) will give you the left edge coordinate and the width of the transformed view, and the top and height of the taller edge (which might be either the left or right edge). Here's my test code:
NSLog(#"frame before tilting = %#", NSStringFromCGRect(self.tiltView.frame));
float angle = 30.0;
CATransform3D rotationAndPerspectiveTransform = CATransform3DIdentity;
rotationAndPerspectiveTransform.m34 = -1.0 / 2000; // Perspective
rotationAndPerspectiveTransform = CATransform3DRotate(rotationAndPerspectiveTransform,
1 * angle / (180.0 / M_PI), 0.0f, 1.0f, 0.0f);
self.tiltView.layer.transform = rotationAndPerspectiveTransform;
NSLog(#"frame after tilting = %#", NSStringFromCGRect(self.tiltView.frame));
Here's the output:
2012-01-04 12:44:08.405 layer[72495:f803] frame before tilting = {{50, 50}, {220, 360}}
2012-01-04 12:44:08.406 layer[72495:f803] frame after tilting = {{62.0434, 44.91}, {190.67, 370.18}}
You can also get the coordinates of the corners of the view, in the superview's coordinate space using convertPoint:fromView: or convertPoint:toView:. Test code:
CGRect bounds = self.tiltView.bounds;
CGPoint upperLeft = bounds.origin;
CGPoint upperRight = CGPointMake(CGRectGetMaxX(bounds), bounds.origin.y);
CGPoint lowerLeft = CGPointMake(bounds.origin.x, CGRectGetMaxY(bounds));
CGPoint lowerRight = CGPointMake(upperRight.x, lowerLeft.y);

#define LogPoint(P) NSLog(@"%s = %@ -> %@", #P, \
    NSStringFromCGPoint(P), \
    NSStringFromCGPoint([self.tiltView.superview convertPoint:P fromView:self.tiltView]))

LogPoint(upperLeft);
LogPoint(upperRight);
LogPoint(lowerLeft);
LogPoint(lowerRight);
Output:
2012-01-04 13:03:00.663 layer[72635:f803] upperLeft = {0, 0} -> {62.0434, 44.91}
2012-01-04 13:03:00.663 layer[72635:f803] upperRight = {220, 0} -> {252.713, 54.8175}
2012-01-04 13:03:00.663 layer[72635:f803] lowerLeft = {0, 360} -> {62.0434, 415.09}
2012-01-04 13:03:00.663 layer[72635:f803] lowerRight = {220, 360} -> {252.713, 405.182}
Notice that the Y coordinates of the upperLeft and upperRight points are different in the superview's coordinate system.
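Building on that, a sketch of how the converted corners could be used to butt the next page up against the tilted one; tiltedPage and nextPage are illustrative names, not from the question:
UIView *tiltedPage = ...;   // the page with the 3D transform applied
UIView *nextPage   = ...;   // the page to place flush against it

CGRect bounds = tiltedPage.bounds;
CGPoint upperRight = [tiltedPage.superview convertPoint:CGPointMake(CGRectGetMaxX(bounds), CGRectGetMinY(bounds))
                                               fromView:tiltedPage];
CGPoint lowerRight = [tiltedPage.superview convertPoint:CGPointMake(CGRectGetMaxX(bounds), CGRectGetMaxY(bounds))
                                               fromView:tiltedPage];

// The tilted page's visual right edge in the superview's coordinate space.
CGFloat rightEdgeX = MAX(upperRight.x, lowerRight.x);

// Place the (untransformed) next page so its left edge starts exactly there.
CGRect nextFrame = nextPage.frame;
nextFrame.origin.x = rightEdgeX;
nextPage.frame = nextFrame;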

Need help to make own coordinate system (classic, center of UIView is 0,0)

I need to create my own coordinate system in a UIView, where (0,0) is the center of the UIView. But I don't know how to do this. Please help.
UIView *view = /*...*/;
CGContextRef ctx = /*...*/;
/* Shift center from UL corner to mid-x, mid-y. */
CGRect bounds = [view bounds];
CGFloat hx = bounds.size.width / 2.0;
CGFloat hy = bounds.size.height / 2.0;
CGContextTranslateCTM(ctx, hx, hy);
/* y still increases down, and x to the right. */
/* if you want y to increase up, then flip y: */
CGContextScaleCTM(ctx, 1.0/*sx*/, -1.0/*sy*/);
By default, (0,0) is in the upper left corner of the view. To move this point to the view's center, modify its bounds:
CGFloat width = self.bounds.size.width;
CGFloat height = self.bounds.size.height;
self.bounds = CGRectMake(-width/2.0, -height/2.0, width, height);
Make sure to repeat this calculation whenever the size of the view changes.
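For example, a minimal sketch of one place to do that, assuming a UIView subclass (recentering in layoutSubviews is just one option):
- (void)layoutSubviews {
    [super layoutSubviews];
    // Keep the bounds origin at the view's center whenever the view is resized.
    CGFloat width = self.bounds.size.width;
    CGFloat height = self.bounds.size.height;
    CGRect centered = CGRectMake(-width / 2.0, -height / 2.0, width, height);
    if (!CGRectEqualToRect(self.bounds, centered)) {
        self.bounds = centered;
    }
}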