I have a draggable view that I have set up with the following code:
#import <UIKit/UIKit.h>
@interface DraggableView : UIImageView {
CGPoint startLocation;
}
@end
#import "DraggableView.h"
@implementation DraggableView
- (id) initWithImage: (UIImage *) anImage
{
if (self = [super initWithImage:anImage])
self.userInteractionEnabled = YES;
return self;
}
- (void) touchesBegan:(NSSet*)touches withEvent:(UIEvent*)event
{
// Calculate and store offset, and pop view into front if needed
CGPoint pt = [[touches anyObject] locationInView:self];
startLocation = pt;
[[self superview] bringSubviewToFront:self];
}
- (void) touchesMoved:(NSSet*)touches withEvent:(UIEvent*)event
{
// Calculate offset
CGPoint pt = [[touches anyObject] locationInView:self];
float dx = pt.x - startLocation.x;
float dy = pt.y - startLocation.y;
CGPoint newcenter = CGPointMake(self.center.x + dx, self.center.y + dy);
// Set new location
self.center = newcenter;
}
How do I go about snapping this view to a grid? From a broad viewpoint, I understand that I could offset the new location in a touchesEnded method call. However, I am hitting a brick wall when I try to implement this.
Thanks in advance for any assistance with this issue.
In touchesMoved, before applying newcenter to your view, round it to your grid step size:
float step = 10.0; // Grid step size.
newcenter.x = step * floor((newcenter.x / step) + 0.5);
newcenter.y = step * floor((newcenter.y / step) + 0.5);
This will cause your view to "snap" as you drag it.
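If you would rather let the view move freely while dragging and only snap when the finger lifts (the touchesEnded idea from the question), the same rounding can go there instead. A minimal sketch, assuming the DraggableView class above and a hypothetical 10-point grid:
- (void) touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    float step = 10.0; // Grid step size (assumed; use whatever your grid needs).
    CGPoint snapped = self.center;
    snapped.x = step * floorf((snapped.x / step) + 0.5f);
    snapped.y = step * floorf((snapped.y / step) + 0.5f);
    // Animate into place so the snap feels smooth rather than abrupt.
    [UIView animateWithDuration:0.15 animations:^{
        self.center = snapped;
    }];
}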
Although the code in jnic's answer is semantically equivalent to the code below, I think this version is more elegant.
Here it is in pseudocode (Java-ish):
int gridCubeWidth = 3; //for instance
int gridCubeHeight = 3;
int newX = Math.round((float) oldX / gridCubeWidth) * gridCubeWidth;   // cast avoids integer truncation before rounding
int newY = Math.round((float) oldY / gridCubeHeight) * gridCubeHeight;
Using this example, a point with x=4 maps to 3, but x=5 maps to 6.
So I generated an SKShapeNode, and I need to know when that node is clicked. I do so by implementing:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
UITouch *touch = [touches anyObject];
CGPoint positionInScene = [touch locationInNode:self];
SKNode *node = [self nodeAtPoint:positionInScene];
if ([node.name isEqualToString:TARGET_NAME]) {
// do whatever
}
}
So the result I'm getting is pretty weird. Clicking the dot itself does in fact work. However, pressing anywhere on the screen that is southwest of the SKShapeNode's position will also trigger the code above.
With the SKShapeNode represented by the red dot, any UITouch in the shaded region would trigger my code above.
Here is how I am building the SKShapeNode. It may also be important to note that my application runs in landscape mode.
#define RANDOM_NUMBER(min, max) (arc4random() % (max - min) + min)
- (SKShapeNode *)makeNodeWithName:(NSString *)name color:(UIColor *)color
{
SKShapeNode *circle = [SKShapeNode new];
int maxXCoord = self.frame.size.width;
int maxYCoord = self.frame.size.height;
CGFloat x = RANDOM_NUMBER((int)TARGET_RADIUS, (int)(maxXCoord - TARGET_RADIUS));
CGFloat y = RANDOM_NUMBER((int)TARGET_RADIUS, (int)(maxYCoord - TARGET_RADIUS - 15));
circle.fillColor = color;
circle.strokeColor = color;
circle.path = [UIBezierPath bezierPathWithOvalInRect:CGRectMake(x, y, TARGET_RADIUS, TARGET_RADIUS)].CGPath;
circle.name = name;
return circle;
}
Thanks for any help!
This happens because the circle node's position is at the origin, while its path is drawn in a rect starting at (x, y). The node's frame is therefore stretched to encompass everything from (0, 0) to (x + TARGET_RADIUS, y + TARGET_RADIUS).
You can check this out for yourself by visualizing the circle's frame:
SKSpriteNode *debugFrame = [SKSpriteNode spriteNodeWithColor:[NSColor yellowColor] size:circle.frame.size];
debugFrame.anchorPoint = CGPointMake(0, 0);
debugFrame.position = circle.frame.origin;
debugFrame.alpha = 0.5f;
[self addChild:debugFrame];
This reveals the actual clickable region (on OSX):
To fix your issue, try this:
circle.path = [UIBezierPath bezierPathWithOvalInRect:CGRectMake(-TARGET_RADIUS/2.0f, -TARGET_RADIUS/2.0f, TARGET_RADIUS, TARGET_RADIUS)].CGPath;
and add
circle.position = CGPointMake(x, y);
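Putting both changes together, the creation method from the question might end up looking like this (a sketch; TARGET_RADIUS and RANDOM_NUMBER are the ones defined above):
- (SKShapeNode *)makeNodeWithName:(NSString *)name color:(UIColor *)color
{
    SKShapeNode *circle = [SKShapeNode new];
    int maxXCoord = self.frame.size.width;
    int maxYCoord = self.frame.size.height;
    CGFloat x = RANDOM_NUMBER((int)TARGET_RADIUS, (int)(maxXCoord - TARGET_RADIUS));
    CGFloat y = RANDOM_NUMBER((int)TARGET_RADIUS, (int)(maxYCoord - TARGET_RADIUS - 15));
    circle.fillColor = color;
    circle.strokeColor = color;
    // Draw the circle around the node's own origin...
    circle.path = [UIBezierPath bezierPathWithOvalInRect:CGRectMake(-TARGET_RADIUS / 2.0f, -TARGET_RADIUS / 2.0f, TARGET_RADIUS, TARGET_RADIUS)].CGPath;
    // ...and move the node itself, so its frame (and nodeAtPoint:) stays tight around the visible circle.
    circle.position = CGPointMake(x, y);
    circle.name = name;
    return circle;
}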
In my app I have a view I want to resize using a two-finger touch, similar to, but not quite, what the pinch gesture recognizer detects.
The idea is similar to what you would do on the desktop by grabbing one of the four corners with the mouse, except that I want a more "touch friendly" interface, where the amounts by which such a corner shrinks or grows are independent horizontally and vertically. That's where I depart from pinch: the pinch's scale factor is the same for both X and Y, which is not what I want.
What I want is to detect two such fingers and resize/move the view as appropriate.
And I have succeeded.
The idea I used (in addition to dealing with the half persistence of UITouch objects ...) was to deem the last moving finger as the "target", with the previous moving one as the "anchor".
I then compute a vector from anchor to target that points toward one of the four corners (it always does, even when it lies on an axis), allowing me to expand/shrink the view's width/height while moving its origin or not. This gives the effect that you can resize the top and left (origin change required) as well as the width/height only (origin left alone), or a combination of both.
To determine how much I need to shrink/grow/offset, I use the difference between the current target point and the previous target point. In other words, the vector determines which corner I am pointing to, and thus which "quadrant" the touch operates in, which lets me choose which of x, y, width or height to alter; the target's current/previous positions tell me by how much.
There are two problems, both of which I can live with, but I am wondering if anyone has gone the extra mile.
1. The user experience is great except for the slightly unnatural feeling that results from resizing the top right corner using a gesture where both fingers reside in the bottom left corner. This does exactly what the finger motion dictates, but feels a bit like "spooky action at a distance". Maybe I just need to get used to it? I am failing to think of how to amend the gesture to achieve something more natural.
2. The math. Kind of ugly. I wanted to use an affine transform but failed to see how I could apply it to my problem, so I resorted to the old trick of arcsine/arccosine, and then "switched" on the vector direction to determine which "quadrant" it falls in (of some hypothetical unit circle, related only to the relative position of anchor and target, irrespective of where they are in the view -- hence problem #1).
So, the questions in summary:
1. Is there a better, more user-friendly approach that would make the drag/resize effect more consistent with where the fingers are within the view?
2. Would an affine transform make the code cleaner? How?
The code.
A: wrapping UITouches
@interface UITouchWrapper : NSObject
@property (assign, nonatomic) CGPoint centerOffset ;
@property (assign, nonatomic) UITouch * touch ;
@end
@implementation UITouchWrapper
@synthesize centerOffset ;
@synthesize touch ;
- (void) dealloc {
::NSLog(@"letting go of %@", self.touch) ;
}
@end
B: UITouch handling
@property (strong, nonatomic) NSMutableArray * touchesWrapper ;
@synthesize touchesWrapper ;
- (UITouchWrapper *) wrapperForTouch: (UITouch *) touch {
for (UITouchWrapper * w in self.touchesWrapper) {
if (w.touch == touch) {
return w ;
}
}
UITouchWrapper * w = [[UITouchWrapper alloc] init] ;
w.touch = touch ;
[self.touchesWrapper addObject:w] ;
return w ;
}
- (void) releaseWrapper: (UITouchWrapper *) wrapper {
[self.touchesWrapper removeObject:wrapper] ;
}
- (NSUInteger) wrapperCount {
return [self.touchesWrapper count] ;
}
C: touch began
- (void) touchesBegan:(NSSet *) touches withEvent:(UIEvent *)event {
// prime (possibly) our touch references. Touch events are unrelated ...
for (UITouch * touch in [touches allObjects]) {
// created on the fly if required
UITouchWrapper * w = [self wrapperForTouch:touch] ;
CGPoint p = [touch locationInView:[self superview]] ;
p.x -= self.center.x ;
p.y -= self.center.y ;
w.centerOffset = p ;
}
}
D: finding 'the other' point (anchor)
- (UITouch *) anchorTouchFor: (UITouch *) touch {
NSTimeInterval mostRecent = 0.0f ;
UITouch * anchor = nil ;
for (UITouchWrapper * w in touchesWrapper) {
if (w.touch == touch) {
continue ;
}
if (mostRecent < w.touch.timestamp) {
mostRecent = w.touch.timestamp ;
anchor = w.touch ;
}
}
return anchor ;
}
E: detecting a drag (= single touch move)
- (void) touchesMoved:(NSSet *)touches withEvent:(UIEvent *) event {
CGRect frame = self.frame ;
for (UITouch * touch in [touches allObjects]) {
UITouchWrapper * w = [self wrapperForTouch:touch] ;
if ([self wrapperCount] == 1) {
// that's a drag. w.touch and touch MUST agree
CGPoint movePoint = [touch locationInView:[self superview]] ;
CGPoint center = self.center ;
center.x = movePoint.x - w.centerOffset.x ;
center.y = movePoint.y - w.centerOffset.y ;
self.center = center ;
CGPoint p = movePoint ;
p.x -= self.center.x ;
p.y -= self.center.y ;
w.centerOffset = p ;
[self setNeedsDisplay] ;
// ...
}
}
}
F: computing the angle [0 .. 2 pi] of the vector anchor:touch
- (float) angleBetween: (UITouch *) anchor andTouch: (UITouch *) touch {
// the coordinate system is flipped along the Y-axis...
CGPoint a = [anchor locationInView:[self superview]] ;
CGPoint t = [touch locationInView:[self superview]] ;
// swap a and t to compensate for the flipped coordinate system;
CGPoint d = CGPointMake(t.x-a.x, a.y-t.y) ;
float distance = sqrtf(d.x * d.x + d.y * d.y) ;
float cosa = (t.x - a.x) / distance ;
float sina = (a.y - t.y) / distance ;
float rc = ::acosf(cosa) ;
float rs = ::asinf(sina) ;
return rs >= 0.0f ? rc : (2.0f * M_PI) - rc ;
}
G: handling the resize:
- (void) touchesMoved:(NSSet *)touches withEvent:(UIEvent *) event {
CGRect frame = self.frame ;
// ...
// That's a resize. We need to determine the direction of the
// move. It is given by the vector made of this touch and the other
// touch. But if we have more than 2 touches, we use the one whose
// time stamp is closest to this touch.
UITouch * anchor = [self anchorTouchFor:touch] ;
// don't do anything if we cannot find an anchor
if (anchor == nil) return ;
CGPoint oldLoc = [touch previousLocationInView:[self superview]] ;
CGPoint newLoc = [touch locationInView:[self superview]] ;
CGPoint p = newLoc ;
p.x -= self.center.x ;
p.y -= self.center.y ;
w.centerOffset = p ;
CGFloat dx = newLoc.x - oldLoc.x ;
CGFloat dy = newLoc.y - oldLoc.y ;
float angle = [self angleBetween:anchor andTouch:touch] ;
if (angle >= M_PI + M_PI_2) { // 270 .. 360 bottom right
frame.size.width += dx ;
frame.size.height += dy ;
} else if (angle >= M_PI) { // 180 .. 270 bottom left
frame.size.width -= dx ;
frame.size.height += dy ;
frame.origin.x += dx ;
} else if (angle >= M_PI_2) { // 90 .. 180 top left
frame.size.width -= dx ;
frame.origin.x += dx ;
frame.size.height -= dy ;
frame.origin.y += dy ;
} else { // 0 .. 90 top right
frame.size.width += dx ;
frame.size.height -= dy ;
frame.origin.y += dy ;
}
// ...
self.frame = frame ;
[self setNeedsLayout] ;
[self setNeedsDisplay] ;
H: cleanup on touchesEnded/touchesCancelled
for (UITouch * touch in [touches allObjects]) {
UITouchWrapper * w = [self wrapperForTouch:touch] ;
if (w.touch == touch) {
[self releaseWrapper:w] ;
}
}
I'm working with the Mixare AR SDK for iOS and I need to solve some bugs. One of them is to show the information of a POI when the POI's view is tapped.
Prelude:
Mixare has an overlay UIView within which MarkerView views are placed. The MarkerView views move around the screen to geolocate the POIs, and each one has two subviews, a UIImageView and a UILabel.
Issue:
Now, for example, there are 3 visible POIs on the screen, so there are 3 MarkerView overlay subviews. If you touch anywhere in the overlay, the info view associated with a random one of the visible POIs is shown.
Desired:
I want the associated POI's info to be shown only when the user taps a MarkerView.
Let's work. I've seen that MarkerView inherits from UIView and implements hitTest:withEvent: as follows:
- (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event {
viewTouched = (MarkerView*)[super hitTest:point withEvent:event];
return self;
}
I've put a breakpoint there, and hitTest:withEvent: is called once for each visible MarkerView, but loadedView is always null so I can't work with it. So I've tried to check whether the hit point is inside the MarkerView frame by implementing pointInside:withEvent: this way:
- (BOOL)pointInside:(CGPoint)point withEvent:(UIEvent *)event
{
NSLog(#"ClassName: %#", [[self class] description]);
NSLog(#"Point Inside: %f, %f", point.x, point.y);
NSLog(#"Frame x: %f y: %f widht:%f height:%f", self.frame.origin.x, self.frame.origin.y, self.frame.size.width, self.frame.size.height);
if (CGRectContainsPoint(self.frame, point))
return YES;
else
return NO;
return YES;
}
But this function always returns NO, even when I touch the MarkerView. When I check the log I see that the X and Y point values sometimes have negative values, and the width and height of the view are very small, 0.00022 or similar, instead of the 100 x 150 that I set for the MarkerView frame at initialization.
Here is an extract of my log, in which you can see the class name, the point and the MarkerView frame values:
ClassName: MarkerView
2011-12-29 13:20:32.679 paisromanico[2996:707] Point Inside: 105.224899, 49.049023
2011-12-29 13:20:32.683 paisromanico[2996:707] Frame x: 187.568573 y: 245.735138 widht:0.021862 height:0.016427
I'm very lost with this issue, so any help will be welcome. Thanks in advance for any help provided, and I'm sorry about this brick of text :(
Edit:
At last I've found that the problem is not in hitTest:withEvent: or pointInside:withEvent:. The problem is with the transform that is applied to the MarkerView for scaling based on distance and rotating the view. If I comment out any code related to this, the Mixare AR SDK works fine; I mean, the info view is shown correctly if you touch a marker, and nothing happens if you touch any other place on the screen.
So, for the moment, I haven't solved the problem, but I've applied a patch that removes the transform-related code from the - (void)updateLocations:(NSTimer *)timer method in the AugmentedViewController.m class:
- (void)updateLocations:(NSTimer *)timer {
//update locations!
if (!ar_coordinateViews || ar_coordinateViews.count == 0) {
return;
}
int index = 0;
NSMutableArray * radarPointValues= [[NSMutableArray alloc]initWithCapacity:[ar_coordinates count]];
for (PoiItem *item in ar_coordinates) {
MarkerView *viewToDraw = [ar_coordinateViews objectAtIndex:index];
viewToDraw.tag = index;
if ([self viewportContainsCoordinate:item]) {
CGPoint loc = [self pointInView:ar_overlayView forCoordinate:item];
CGFloat scaleFactor = 1.5;
if (self.scaleViewsBasedOnDistance) {
scaleFactor = 1.0 - self.minimumScaleFactor * (item.radialDistance / self.maximumScaleDistance);
}
float width = viewToDraw.bounds.size.width ;//* scaleFactor;
float height = viewToDraw.bounds.size.height; // * scaleFactor;
viewToDraw.frame = CGRectMake(loc.x - width / 2.0, loc.y-height / 2.0, width, height);
/*
CATransform3D transform = CATransform3DIdentity;
//set the scale if it needs it.
if (self.scaleViewsBasedOnDistance) {
//scale the perspective transform if we have one.
transform = CATransform3DScale(transform, scaleFactor, scaleFactor, scaleFactor);
}
if (self.rotateViewsBasedOnPerspective) {
transform.m34 = 1.0 / 300.0;
double itemAzimuth = item.azimuth;
double centerAzimuth = self.centerCoordinate.azimuth;
if (itemAzimuth - centerAzimuth > M_PI) centerAzimuth += 2*M_PI;
if (itemAzimuth - centerAzimuth < -M_PI) itemAzimuth += 2*M_PI;
double angleDifference = itemAzimuth - centerAzimuth;
transform = CATransform3DRotate(transform, self.maximumRotationAngle * angleDifference / (VIEWPORT_HEIGHT_RADIANS / 2.0) , 0, 1, 0);
}
viewToDraw.layer.transform = transform;
*/
//if we don't have a superview, set it up.
if (!(viewToDraw.superview)) {
[ar_overlayView addSubview:viewToDraw];
[ar_overlayView sendSubviewToBack:viewToDraw];
}
} else {
[viewToDraw removeFromSuperview];
viewToDraw.transform = CGAffineTransformIdentity;
}
[radarPointValues addObject:item];
index++;
}
float radius = [[[NSUserDefaults standardUserDefaults] objectForKey:@"radius"] floatValue];
if(radius <= 0 || radius > 100){
radius = 5.0;
}
radarView.pois = radarPointValues;
radarView.radius = radius;
[radarView setNeedsDisplay];
[radarPointValues release];
}
Could any Core Graphics or UI expert give their point of view on this issue?
You could either hit-test like this:
if ([self pointInside:point withEvent:event]) {
// do something
}
Or, I would suggest you add the hit test to the superview, and do the following in the hitTest: of the parent of the marker views:
if ([markerView pointInside:point withEvent:event]) {
// extract the tag and show the relevant info
}
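For example, the overlay view (the MarkerViews' superview) could own the hit test and hand the touch to the marker that was actually tapped. A rough sketch, assuming you add a hitTest:withEvent: override to the class of the view that contains the MarkerViews:
- (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event {
    for (UIView *subview in self.subviews) {
        if (![subview isKindOfClass:[MarkerView class]])
            continue;
        // Convert the point into the marker's own coordinate system before testing.
        CGPoint local = [subview convertPoint:point fromView:self];
        if ([subview pointInside:local withEvent:event]) {
            return subview; // deliver the touch to this marker only
        }
    }
    return [super hitTest:point withEvent:event]; // no marker hit; default behaviour
}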
Hope this helps
I'm trying to spin a circle image based on a user swipe. For now I've done it by treating the circle as two parts: one is the left side and the other is the right side.
If the user swipes down on the right half, it rotates clockwise, and swiping up rotates it anticlockwise; on the left side I've done the reverse. So my image rotates fine only when I touch the left and right halves; on touching the top and bottom, it behaves differently.
I've even tried calculating the radians, but that's not working either.
Can anyone suggest a better way to identify clockwise vs. anticlockwise rotation?
Thank you,
Lakshmi jones
You should approach this problem with trigonometry. Assume you know the starting point of the swipe (a1,b1) and the ending point of the swipe (a2,b2), and that the circle's centre is at (x,y).
If we know the difference between the angles made by the lines (x,y)->(a1,b1) and (x,y)->(a2,b2), we will know whether to rotate clockwise or anticlockwise, based on whether that angle difference is positive or negative.
The angle made by a line is calculated below. Let the red angle (for the start point) be red:
if(a1-x==0){
    if(b1-y>0) red = pi/2
    else red = 3*pi/2
}
else{
    // start from the acute angle, then adjust for the quadrant
    red = tan-inverse( abs((b1-y)/(a1-x)) )
    if(a1-x<0){
        if(b1-y<=0)              // third quadrant
            red += pi
        else                     // second quadrant
            red = pi - red
    }
    else if(a1-x>0 && b1-y<0){   // fourth quadrant
        red = 2*pi - red
    }
}
See here to know how to calculate tan-inverse.
Similarly, calculate the value of the angle green (for the end point). After that, simply comparing the values of green and red will tell you what to do:
if(red - green == pi || red - green == 0){
do_nothing();
}else if(red - green > 0){
rotate_clockwise();
}else{
rotate_anticlockwise();
}
By using the acceleration/velocity data of the swipe you could rotate the circle with the same acceleration/velocity.
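In Objective-C you can let atan2f() do the quadrant bookkeeping, and the sign of the change in angle around the centre then gives you the direction directly. A minimal sketch, assuming the touch handling lives in the UIView that shows the circle image (note that UIKit's y-axis points down, so the sign convention is mirrored relative to the math above):
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    // Measure in the superview's coordinate space so the space we measure in
    // is not itself being rotated.
    CGPoint center = self.center;
    CGPoint previous = [touch previousLocationInView:self.superview];
    CGPoint current = [touch locationInView:self.superview];
    float previousAngle = atan2f(previous.y - center.y, previous.x - center.x);
    float currentAngle = atan2f(current.y - center.y, current.x - center.x);
    float delta = currentAngle - previousAngle;
    // Keep the difference in (-pi, pi] so crossing the +/-pi boundary
    // does not look like a full turn in the other direction.
    if (delta > M_PI)  delta -= 2.0f * M_PI;
    if (delta < -M_PI) delta += 2.0f * M_PI;
    // In screen coordinates a positive delta is a clockwise drag;
    // rotate the view by the same amount so it follows the finger.
    self.transform = CGAffineTransformRotate(self.transform, delta);
}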
Have you tried this tutorial for your problem? It should help you.
The .h file for calculating the swipe:
#import <UIKit/UIKit.h>
#import "SMRotaryProtocol.h"
@interface SMRotaryWheel : UIControl
@property (weak) id <SMRotaryProtocol> delegate;
@property (nonatomic, strong) UIView *container;
@property int numberOfSections;
@property CGAffineTransform startTransform;
@property (nonatomic, strong) NSMutableArray *cloves;
@property int currentValue;
- (id) initWithFrame:(CGRect)frame andDelegate:(id)del withSections:(int)sectionsNumber;
@end
And the .m file is
#import "SMRotaryWheel.h"
#import <QuartzCore/QuartzCore.h>
#import "SMCLove.h"
@interface SMRotaryWheel()
- (void)drawWheel;
- (float) calculateDistanceFromCenter:(CGPoint)point;
- (void) buildClovesEven;
- (void) buildClovesOdd;
- (UIImageView *) getCloveByValue:(int)value;
- (NSString *) getCloveName:(int)position;
@end
static float deltaAngle;
static float minAlphavalue = 0.6;
static float maxAlphavalue = 1.0;
@implementation SMRotaryWheel
@synthesize delegate, container, numberOfSections, startTransform, cloves, currentValue;
- (id) initWithFrame:(CGRect)frame andDelegate:(id)del withSections:(int)sectionsNumber {
if ((self = [super initWithFrame:frame])) {
self.currentValue = 0;
self.numberOfSections = sectionsNumber;
self.delegate = del;
[self drawWheel];
}
return self;
}
- (void) drawWheel {
container = [[UIView alloc] initWithFrame:self.frame];
CGFloat angleSize = 2*M_PI/numberOfSections;
for (int i = 0; i < numberOfSections; i++) {
UIImageView *im = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"segment.png"]];
im.layer.anchorPoint = CGPointMake(1.0f, 0.5f);
im.layer.position = CGPointMake(container.bounds.size.width/2.0-container.frame.origin.x,
container.bounds.size.height/2.0-container.frame.origin.y);
im.transform = CGAffineTransformMakeRotation(angleSize*i);
im.alpha = minAlphavalue;
im.tag = i;
if (i == 0) {
im.alpha = maxAlphavalue;
}
UIImageView *cloveImage = [[UIImageView alloc] initWithFrame:CGRectMake(12, 15, 40, 40)];
cloveImage.image = [UIImage imageNamed:[NSString stringWithFormat:@"icon%i.png", i]];
[im addSubview:cloveImage];
[container addSubview:im];
}
container.userInteractionEnabled = NO;
[self addSubview:container];
cloves = [NSMutableArray arrayWithCapacity:numberOfSections];
UIImageView *bg = [[UIImageView alloc] initWithFrame:self.frame];
bg.image = [UIImage imageNamed:@"bg.png"];
[self addSubview:bg];
UIImageView *mask = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 58, 58)];
mask.image = [UIImage imageNamed:@"centerButton.png"];
mask.center = self.center;
mask.center = CGPointMake(mask.center.x, mask.center.y+3);
[self addSubview:mask];
if (numberOfSections % 2 == 0) {
[self buildClovesEven];
} else {
[self buildClovesOdd];
}
[self.delegate wheelDidChangeValue:[self getCloveName:currentValue]];
}
- (UIImageView *) getCloveByValue:(int)value {
UIImageView *res;
NSArray *views = [container subviews];
for (UIImageView *im in views) {
if (im.tag == value)
res = im;
}
return res;
}
- (void) buildClovesEven {
CGFloat fanWidth = M_PI*2/numberOfSections;
CGFloat mid = 0;
for (int i = 0; i < numberOfSections; i++) {
SMClove *clove = [[SMClove alloc] init];
clove.midValue = mid;
clove.minValue = mid - (fanWidth/2);
clove.maxValue = mid + (fanWidth/2);
clove.value = i;
if (clove.maxValue-fanWidth < - M_PI) {
mid = M_PI;
clove.midValue = mid;
clove.minValue = fabsf(clove.maxValue);
}
mid -= fanWidth;
NSLog(#"cl is %#", clove);
[cloves addObject:clove];
}
}
- (void) buildClovesOdd {
CGFloat fanWidth = M_PI*2/numberOfSections;
CGFloat mid = 0;
for (int i = 0; i < numberOfSections; i++) {
SMClove *clove = [[SMClove alloc] init];
clove.midValue = mid;
clove.minValue = mid - (fanWidth/2);
clove.maxValue = mid + (fanWidth/2);
clove.value = i;
mid -= fanWidth;
if (clove.minValue < - M_PI) {
mid = -mid;
mid -= fanWidth;
}
[cloves addObject:clove];
NSLog(#"cl is %#", clove);
}
}
- (float) calculateDistanceFromCenter:(CGPoint)point {
CGPoint center = CGPointMake(self.bounds.size.width/2.0f, self.bounds.size.height/2.0f);
float dx = point.x - center.x;
float dy = point.y - center.y;
return sqrt(dx*dx + dy*dy);
}
- (BOOL)beginTrackingWithTouch:(UITouch *)touch withEvent:(UIEvent *)event {
CGPoint touchPoint = [touch locationInView:self];
float dist = [self calculateDistanceFromCenter:touchPoint];
if (dist < 40 || dist > 100)
{
// forcing a tap to be on the ferrule
NSLog(#"ignoring tap (%f,%f)", touchPoint.x, touchPoint.y);
return NO;
}
float dx = touchPoint.x - container.center.x;
float dy = touchPoint.y - container.center.y;
deltaAngle = atan2(dy,dx);
startTransform = container.transform;
UIImageView *im = [self getCloveByValue:currentValue];
im.alpha = minAlphavalue;
return YES;
}
- (BOOL)continueTrackingWithTouch:(UITouch*)touch withEvent:(UIEvent*)event
{
CGPoint pt = [touch locationInView:self];
float dist = [self calculateDistanceFromCenter:pt];
if (dist < 40 || dist > 100)
{
// a drag path too close to the center
NSLog(#"drag path too close to the center (%f,%f)", pt.x, pt.y);
// here you might want to implement your solution when the drag
// is too close to the center
// You might go back to the clove previously selected
// or you might calculate the clove corresponding to
// the "exit point" of the drag.
}
float dx = pt.x - container.center.x;
float dy = pt.y - container.center.y;
float ang = atan2(dy,dx);
float angleDifference = deltaAngle - ang;
container.transform = CGAffineTransformRotate(startTransform, -angleDifference);
return YES;
}
- (void)endTrackingWithTouch:(UITouch*)touch withEvent:(UIEvent*)event
{
CGFloat radians = atan2f(container.transform.b, container.transform.a);
CGFloat newVal = 0.0;
for (SMClove *c in cloves) {
if (c.minValue > 0 && c.maxValue < 0) { // anomalous case
if (c.maxValue > radians || c.minValue < radians) {
if (radians > 0) { // we are in the positive quadrant
newVal = radians - M_PI;
} else { // we are in the negative one
newVal = M_PI + radians;
}
currentValue = c.value;
}
}
else if (radians > c.minValue && radians < c.maxValue) {
newVal = radians - c.midValue;
currentValue = c.value;
}
}
[UIView beginAnimations:nil context:NULL];
[UIView setAnimationDuration:0.2];
CGAffineTransform t = CGAffineTransformRotate(container.transform, -newVal);
container.transform = t;
[UIView commitAnimations];
[self.delegate wheelDidChangeValue:[self getCloveName:currentValue]];
UIImageView *im = [self getCloveByValue:currentValue];
im.alpha = maxAlphavalue;
}
- (NSString *) getCloveName:(int)position {
NSString *res = @"";
switch (position) {
case 0:
res = @"Circles";
break;
case 1:
res = @"Flower";
break;
case 2:
res = @"Monster";
break;
case 3:
res = @"Person";
break;
case 4:
res = @"Smile";
break;
case 5:
res = @"Sun";
break;
case 6:
res = @"Swirl";
break;
case 7:
res = @"3 circles";
break;
case 8:
res = @"Triangle";
break;
default:
break;
}
return res;
}
@end
The main methods that will help you to track the swipe are:
- (float) calculateDistanceFromCenter:(CGPoint)point
- (BOOL)beginTrackingWithTouch:(UITouch *)touch withEvent:(UIEvent *)event
- (BOOL)continueTrackingWithTouch:(UITouch*)touch withEvent:(UIEvent*)event
- (void)endTrackingWithTouch:(UITouch*)touch withEvent:(UIEvent*)event
Hope this helps you :)
Though trigonometry is one approach to the math, it's simpler and requires much less processor power to do this with vectors.
A picture for reference:
The center of the dial you want to spin is point C.
From the user interface you must get a swipe start point A and "swipe vector" s that shows how the user's finger is moving. If the OS provides only a second point B some time after A, then compute s = B - A.
You want to compute the component of s that's tangent to the circle centered at C passing through A. This will allow the user to start his/her swipe anywhere and have it treated as a torque about point C. This ought to be intuitive.
This is not hard. The radius of the circle is shown as vector r = A - C. The perpendicular to this vector is called "r perp" shown with the "thumbtack" symbol in the picture. It is just the point (-y, x) where x and y are the components of r.
The signed length of the projection of s onto perp(r) is just a normalized dot product:
signed_length_p = (s · perp(r)) / |perp(r)|
This is a scalar that is positive if rotation is counter clockwise around C and negative if clockwise. So the sign tells you the direction of rotation. The absolute value tells you how much or how fast to rotate.
Suppose we already have swipe vector s stored as sx and sy and center C as cx and cy. Then the pseudo code is just:
r_perp_x = cy - ay; r_perp_y = ax - cx;
signed_length_p = (sx * r_perp_x + sy * r_perp_y) / sqrt(r_perp_x ^ 2 + r_perp_y ^ 2)
The desired number is signed_length_p.
The only caveat is to ignore touches A that are close to C. They can produce very large output values or division by zero. This is easy to fix. Just check the length of r and quit if it's less than some reasonable value.
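In code this is only a few lines. A small sketch (the helper name is hypothetical, CoreGraphics point types assumed), with A as the swipe start, C the centre and s the swipe vector:
// Signed length of the swipe's component tangent to the circle around C.
// With the math convention above (y up), positive means counterclockwise;
// in UIKit's flipped screen coordinates the sign is reversed.
static CGFloat signedTangentialComponent(CGPoint a, CGPoint c, CGPoint s) {
    CGFloat rx = a.x - c.x, ry = a.y - c.y;   // r = A - C
    CGFloat rpx = -ry, rpy = rx;              // perp(r) = (-y, x)
    CGFloat len = sqrt(rpx * rpx + rpy * rpy);
    if (len < 10.0) return 0.0;               // ignore touches too close to C
    return (s.x * rpx + s.y * rpy) / len;     // normalized dot product
}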
If your current solution is "almost fine" for you, the simplest thing to do would be to just fix it with two more areas:
You spin your image clockwise whenever the user swiped:
- right or up (started in area A)
- right or down (started in area B)
- left or down (started in area D)
- left or up (started in area C)
Otherwise, spin it anticlockwise.
I have done something similar with Xamarin.iOS but I doubt you want to see C# code so maybe this GitHub project will give you the necessary info:
https://github.com/hollance/MHRotaryKnob
I do this quite a bit in my code:
self.sliderOne.frame = CGRectMake(newX, 0, self.sliderOne.frame.size.width, self.sliderOne.frame.size.height);
Is there any way to avoid this tedious code? I have tried this type of thing:
self.sliderOne.frame.origin.x = newX;
but I get an "lvalue required as left operand of assignment" error.
I finally followed @Dave DeLong's suggestion and made a category. All you have to do is import it in any class that wants to take advantage of it.
UIView+AlterFrame.h
#import <UIKit/UIKit.h>
@interface UIView (AlterFrame)
- (void) setFrameWidth:(CGFloat)newWidth;
- (void) setFrameHeight:(CGFloat)newHeight;
- (void) setFrameOriginX:(CGFloat)newX;
- (void) setFrameOriginY:(CGFloat)newY;
@end
UIView+AlterFrame.m
#import "UIView+AlterFrame.h"
@implementation UIView (AlterFrame)
- (void) setFrameWidth:(CGFloat)newWidth {
CGRect f = self.frame;
f.size.width = newWidth;
self.frame = f;
}
- (void) setFrameHeight:(CGFloat)newHeight {
CGRect f = self.frame;
f.size.height = newHeight;
self.frame = f;
}
- (void) setFrameOriginX:(CGFloat)newX {
CGRect f = self.frame;
f.origin.x = newX;
self.frame = f;
}
- (void) setFrameOriginY:(CGFloat)newY {
CGRect f = self.frame;
f.origin.y = newY;
self.frame = f;
}
@end
I could DRY up the methods using blocks... I'll do that at some point soon, I hope.
Later: I just noticed CGRectOffset and CGRectInset, so this category could be cleaned up a bit (if not eliminated altogether).
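For instance, the two origin setters collapse nicely onto CGRectOffset; a sketch of what that cleanup might look like (same category as above):
- (void) setFrameOriginX:(CGFloat)newX {
    // CGRectOffset shifts a rect by (dx, dy), so offset by the difference.
    self.frame = CGRectOffset(self.frame, newX - CGRectGetMinX(self.frame), 0);
}
- (void) setFrameOriginY:(CGFloat)newY {
    self.frame = CGRectOffset(self.frame, 0, newY - CGRectGetMinY(self.frame));
}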
Yeah, you have to do:
CGRect newFrame = self.sliderOne.frame;
newFrame.origin.x = newX;
self.sliderOne.frame = newFrame;
It sucks, I know. If you find yourself doing this a lot, you may want to add categories to UIView/NSView to alter this stuff for you:
@interface UIView (FrameMucking)
- (void) setWidth:(CGFloat)newWidth;
@end
@implementation UIView (FrameMucking)
- (void) setWidth:(CGFloat)newWidth {
CGRect f = [self frame];
f.size.width = newWidth;
[self setFrame:f];
}
@end
Etc.
The issue here is that self.sliderOne.frame.origin.x is the same thing as [[self sliderOne] frame].origin.x. The frame getter returns a CGRect struct by value, so you would only be assigning into a temporary copy, which is why the compiler rejects the assignment.
So no, that "tedious" code is necessary, although it can be shortened up a bit:
CGRect rect = thing.frame;
thing.frame = CGRectMake(CGRectGetMinX(rect), CGRectGetMinY(rect) + 10, etc...);