I'm creating an app, and in one part I'm trying to implement handwriting. I've used an image view to write on, which will be saved and sent to a server as a PDF. I have implemented touchesBegan, touchesMoved, and touchesEnded, and using CGContextAddLineToPoint I can draw the lines as the user writes. However, the writing is a bit pointy, and I'm trying to create curves when the user changes the direction of the writing, i.e. when a letter is written. I don't really want to implement handwriting recognition, just to smooth the line being drawn.
I've created a "buffer", made up of an array, which holds two intermediate points while the user moves the touch, as follows:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    lastPoint = [touch locationInView:drawImage];
    NSLog(@"first touch: %@", [NSValue valueWithCGPoint:lastPoint]);
}
(drawImage is the image view I'm writing on, by the way.) Then on to touchesMoved:
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    currentPoint = [touch locationInView:drawImage];
    if (movementCount < 2) {
        [pointHolderArray addObject:[NSValue valueWithCGPoint:currentPoint]];
        movementCount++;
        NSLog(@"pointHolderArray: %@", pointHolderArray);
    } else {
        NSLog(@"buffer full");
        UIGraphicsBeginImageContext(drawImage.frame.size);
        [drawImage.image drawInRect:drawImage.frame];
        CGContextRef context = UIGraphicsGetCurrentContext();
        CGContextSetLineCap(context, kCGLineCapRound);
        CGContextSetAllowsAntialiasing(context, YES);
        CGContextSetLineWidth(context, 3.0);
        CGContextSetRGBStrokeColor(context, 0.3, 0.5, 0.2, 1.0);
        CGContextBeginPath(context);
        CGContextMoveToPoint(context, lastPoint.x, lastPoint.y);
        CGContextAddLineToPoint(context, [[pointHolderArray objectAtIndex:0] CGPointValue].x, [[pointHolderArray objectAtIndex:0] CGPointValue].y);
        CGContextMoveToPoint(context, [[pointHolderArray objectAtIndex:0] CGPointValue].x, [[pointHolderArray objectAtIndex:0] CGPointValue].y);
        CGContextAddLineToPoint(context, [[pointHolderArray objectAtIndex:1] CGPointValue].x, [[pointHolderArray objectAtIndex:1] CGPointValue].y);
        CGContextMoveToPoint(context, [[pointHolderArray objectAtIndex:1] CGPointValue].x, [[pointHolderArray objectAtIndex:1] CGPointValue].y);
        CGContextAddLineToPoint(context, currentPoint.x, currentPoint.y);
        CGContextStrokePath(context);
        drawImage.image = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        lastPoint = [touch previousLocationInView:drawImage];
        [pointHolderArray removeAllObjects];
        movementCount = 0;
    }
}
As you can see, every time two points have been stored, I then draw the line through them. This has made the drawing slightly harder, and the line is even more ragged.
Can anyone help with this problem? I'm very new to graphics on iOS, and I'm not sure if I'm even using the right API, or whether I should even be using an image view.
Thanks a lot in advance.
How about using a UIPanGestureRecognizer and GLPaint? You set up something like this in viewDidLoad:
- (void)viewDidLoad
{
    [super viewDidLoad];
    UIPanGestureRecognizer *g = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(viewPanned:)];
    ..
Then, capture the pan movement:
- (void)viewPanned:(UIPanGestureRecognizer *)g
{
    CGPoint point = [g locationInView:paintView];
    // invert y for OpenGL
    point.y = paintView.bounds.size.height - point.y;
    switch (g.state) {
        case UIGestureRecognizerStateBegan:
            prevPoint = point;
            break;
        case UIGestureRecognizerStateChanged:
            [paintView renderLineFromPoint:prevPoint toPoint:point];
            prevPoint = point;
            break;
        case UIGestureRecognizerStateEnded:
            prevPoint = point;
            break;
    }
}
The paintView shown here is an instance of the PaintView class you can find in the GLPaint example in Apple's sample code. The example does not show how to change the pen size, but you can do it by calling glPointSize() with various values:
- (void)setBrushSize:(CGFloat)size
{
    if (size <= 1.0) {
        glPointSize(10);
    }
    else if (size <= 2.0) {
        glPointSize(20);
    }
    else if (size <= 3.0) {
        glPointSize(30);
    }
    else if (size <= 4.0) {
        glPointSize(40);
    }
    ..
Hope this helps..
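If you'd rather stay with Core Graphics instead of moving to OpenGL, a common way to smooth the strokes is to run a quadratic Bézier curve through the midpoints of successive touch samples, using the latest sample as the control point. Here is a minimal sketch against the question's drawImage setup; the previousPoint1/previousPoint2/currentTouch ivars are illustrative and would be seeded in touchesBegan:
static CGPoint midPoint(CGPoint p1, CGPoint p2) {
    return CGPointMake((p1.x + p2.x) * 0.5, (p1.y + p2.y) * 0.5);
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    previousPoint2 = previousPoint1;
    previousPoint1 = currentTouch;
    currentTouch = [touch locationInView:drawImage];
    CGPoint mid1 = midPoint(previousPoint2, previousPoint1);
    CGPoint mid2 = midPoint(currentTouch, previousPoint1);

    UIGraphicsBeginImageContext(drawImage.frame.size);
    [drawImage.image drawInRect:drawImage.bounds];
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetLineCap(context, kCGLineCapRound);
    CGContextSetLineWidth(context, 3.0);
    CGContextSetRGBStrokeColor(context, 0.3, 0.5, 0.2, 1.0);
    // Curving from midpoint to midpoint means consecutive segments share a
    // tangent where they meet, so the stroke has no visible corners.
    CGContextMoveToPoint(context, mid1.x, mid1.y);
    CGContextAddQuadCurveToPoint(context, previousPoint1.x, previousPoint1.y, mid2.x, mid2.y);
    CGContextStrokePath(context);
    drawImage.image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
}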
I have a certain point at the bottom of the screen. When I touch the screen somewhere, I'd like a dotted line to appear between that point and the point my finger is at. The length and rotation of the line will change based on where my finger is, or moves to.
I'm assuming I'd make the dotted line with a repetition of a small line image, but I guess that's why I need your help!
Note that all this could be organized better, and I personally don't like SKShapeNode in any shape :) or form, but this is one way to do it:
#import "GameScene.h"
#implementation GameScene{
SKShapeNode *line;
}
-(void)didMoveToView:(SKView *)view {
/* Setup your scene here */
line = [SKShapeNode node];
[self addChild:line];
[line setStrokeColor:[UIColor redColor]];
}
- (void)drawLine:(CGPoint)endingPoint {
    CGMutablePathRef pathToDraw = CGPathCreateMutable();
    CGPathMoveToPoint(pathToDraw, NULL, CGRectGetMidX(self.frame), CGRectGetMidY(self.frame));
    CGPathAddLineToPoint(pathToDraw, NULL, endingPoint.x, endingPoint.y);

    CGFloat pattern[2];
    pattern[0] = 20.0;
    pattern[1] = 20.0;
    CGPathRef dashed = CGPathCreateCopyByDashingPath(pathToDraw, NULL, 0, pattern, 2);
    line.path = dashed;
    CGPathRelease(dashed);
    CGPathRelease(pathToDraw); // release the mutable path too, or it leaks
}

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    /* Called when a touch begins */
    for (UITouch *touch in touches) {
        CGPoint location = [touch locationInNode:self];
        [self drawLine:location];
    }
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    for (UITouch *touch in touches) {
        CGPoint location = [touch locationInNode:self];
        [self drawLine:location];
    }
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    line.path = nil;
}
The result is a dashed red line that follows the finger from the center of the scene (screenshot omitted).
Also, I don't know how performant this is, but you can test it, tweak it and improve it. Or even use SKSpriteNode like you said. Happy coding!
EDIT:
I just noticed that you said dotted (not dashed) :)
You have to change the pattern to something like:
pattern[0] = 3.0;
pattern[1] = 3.0;
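For rounder dots, you could also try giving the shape node a round line cap, so each short dash gets capped into a dot-like blob. This is an assumption worth testing rather than a guarantee:
// Hypothetical tweak: round caps make short dashes look like dots.
line.lineCap = kCGLineCapRound;
line.lineWidth = 3.0;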
When researching "How do I detect a touch event on a moving UIImageView?" I've come across several answers, and I've tried to implement them in my app. Nothing I've come across seems to work. I'll explain what I'm trying to do, then post my code. Any thoughts, suggestions, comments or answers are appreciated!
My app has several cards floating across the screen from left to right. These cards are various colors, and the object of the game is to drag the cards down to their similarly colored corresponding containers. If the user doesn't touch and drag the cards fast enough, the cards will simply drift off the screen and points will be lost. The more cards placed in the correct containers, the better the score.
I've written code using Core Animation to have my cards float from left to right. This works. However, when attempting to touch a card and drag it toward its container, the app isn't correctly detecting that I'm touching the card's UIImageView.
To test whether I'm properly implementing the code to move a card, I've also written some code that allows movement of a non-moving card. In this case my touch is detected and acts accordingly.
Why can I only interact with stationary cards? After researching this quite a bit, it seems that
options:UIViewAnimationOptionAllowUserInteraction
is the key ingredient for getting my moving UIImageViews to be detected. However, when I tried it, it didn't seem to have any effect.
Another key thing that I may be doing wrong is not properly utilizing the presentation layer. I've added code like this to my project, and it also only works on non-moving objects:
UITouch *t = [touches anyObject];
UIView *myTouchedView = [t view];
CGPoint thePoint = [t locationInView:self.view];
if ([_card.layer.presentationLayer hitTest:thePoint]) {
    NSLog(@"You touched a Card!");
} else {
    NSLog(@"background touched");
}
After trying these types of things, I'm getting stuck. Here is my code so you can understand this a bit more completely:
#import "RBViewController.h"
#import <QuartzCore/QuartzCore.h>
#interface RBViewController ()
#property (nonatomic, strong) UIImageView *card;
#end
#implementation RBViewController
- (void)viewDidLoad
{
srand(time (NULL)); // will be used for random colors, drift speeds, and locations of cards
[super viewDidLoad];
[self setOutFirstCardSet]; // this sends out 4 floating cards across the screen
// the following creates a non-moving image that I can move.
_card = [[UIImageView alloc] initWithFrame:CGRectMake(400,400,100,100)];
_card.image = [UIImage imageNamed:#"goodguyPINK.png"];
_card.userInteractionEnabled = YES;
[self.view addSubview:_card];
}
The following method sends out cards from a random location on the left side of the screen and uses Core Animation to drift each card across the screen. Note that the color of the card and the speed of the drift are randomly generated as well.
- (void)setOutFirstCardSet
{
    for (int i = 1; i < 5; i++) // sends out 4 shapes
    {
        CGRect cardFramei;
        int startingLocation = rand() % 325;
        CGRect cardOrigini = CGRectMake(-100, startingLocation + 37, 92, 87);
        cardFramei.size = CGSizeMake(92, 87);
        CGPoint origini;
        origini.y = startingLocation + 37;
        origini.x = 1200;
        cardFramei.origin = origini;
        _card.userInteractionEnabled = YES;
        _card = [[UIImageView alloc] initWithFrame:cardOrigini];
        int randomColor = rand() % 7;
        if (randomColor == 0) {
            _card.image = [UIImage imageNamed:@"goodguy.png"];
        }
        else if (randomColor == 1) {
            _card.image = [UIImage imageNamed:@"goodguyPINK.png"];
        }
        else if (randomColor == 2) {
            _card.image = [UIImage imageNamed:@"goodGuyPURPLE.png"];
        }
        else if (randomColor == 3) {
            _card.image = [UIImage imageNamed:@"goodGuyORANGE.png"];
        }
        else if (randomColor == 4) {
            _card.image = [UIImage imageNamed:@"goodGuyLightPINK.png"];
        }
        else if (randomColor == 5) {
            _card.image = [UIImage imageNamed:@"goodGuyBLUE.png"];
        }
        else if (randomColor == 6) {
            _card.image = [UIImage imageNamed:@"goodGuyGREEN.png"];
        }
        _card.userInteractionEnabled = YES; // this is also written in my viewDidLoad method
        [[_card.layer presentationLayer] hitTest:origini]; // not really sure what this does
        [self.view addSubview:_card];

        int randomSpeed = rand() % 20;
        int randomDelay = rand() % 2;
        [UIView animateWithDuration:randomSpeed + 10
                              delay:randomDelay + 4
                            options:UIViewAnimationOptionAllowUserInteraction // here is the option that I thought would allow me to interact with the moving cards. Not sure why I can't.
                         animations:^{
                             _card.frame = cardFramei;
                         }
                         completion:NULL];
    }
}
Notice that the following method is where I put the CALayer and hit-test code. I'm not sure if I'm doing this correctly.
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *t = [touches anyObject];
    UIView *myTouchedView = [t view];
    CGPoint thePoint = [t locationInView:self.view];
    thePoint = [self.view.layer convertPoint:thePoint toLayer:self.view.layer.superlayer];
    CALayer *theLayer = [self.view.layer hitTest:thePoint];

    if ([_card.layer.presentationLayer hitTest:thePoint]) {
        NSLog(@"You touched a Shape!"); // this only logs when I touch a non-moving shape
    } else {
        NSLog(@"background touched"); // this logs when I touch the background or a moving shape
    }

    if (myTouchedView == _card) {
        NSLog(@"Touched a card");
        _boolHasCard = YES;
    } else {
        NSLog(@"Didn't touch a card");
        _boolHasCard = NO;
    }
}
I want the following method to work on moving shapes; it only works on non-moving shapes. Many answers say to have the touch ask which class the card belongs to. As of now, all my cards are created in the same class (the view controller class). When I tried to have the cards be their own class, I had trouble getting that view to appear on my main background controller. Must the various cards be their own class for this to work, or can it work without that?
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [[event allTouches] anyObject];
    if ([touch view] == self.card) {
        CGPoint location = [touch locationInView:self.view];
        self.card.center = location;
    }
}
This next method resets the movement of a card if the user starts moving it and then lifts their finger.
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    if (_boolHasCard == YES) {
        [UIView animateWithDuration:3
                              delay:0
                            options:UIViewAnimationOptionAllowUserInteraction
                         animations:^{
                             CGRect newCardOrigin = CGRectMake(1200, _card.center.y - 92/2, 92, 87);
                             _card.frame = newCardOrigin;
                         }
                         completion:NULL];
    }
}

@end
The short answer is, you can't.
Core Animation does not actually move the objects along the animation path. It moves the presentation layer of the object's layer.
The moment the animation begins, the system thinks the object is already at its destination.
There is no way around this if you want to use Core Animation.
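You can see this for yourself by logging both layers while the animation is in flight. This is a quick illustrative check using the question's _card, not code from the original answer:
// During the animation, the model layer already reports the destination;
// only the presentation layer reflects what is actually on screen.
CALayer *presentation = (CALayer *)_card.layer.presentationLayer;
NSLog(@"model position: %@", NSStringFromCGPoint(_card.layer.position));
NSLog(@"on-screen position: %@", NSStringFromCGPoint(presentation.position));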
You have a couple of choices.
You can set up a CADisplayLink on your view controller and roll your own animation, moving the center of your views by a small amount on each call to the display link (see the sketch below). This might lead to poor performance and jerky animation if you're animating a lot of objects, however.
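A minimal sketch of that first option (self.cards and stepAnimation: are illustrative names, not from the question's code):
- (void)startAnimating
{
    // Drive the motion manually so each view's real frame tracks the motion
    // and ordinary touch handling keeps working.
    CADisplayLink *link = [CADisplayLink displayLinkWithTarget:self
                                                      selector:@selector(stepAnimation:)];
    [link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];
}

- (void)stepAnimation:(CADisplayLink *)link
{
    for (UIImageView *card in self.cards) {
        CGPoint center = card.center;
        center.x += 2.0; // per-frame step; scale by link.duration for frame-rate independence
        card.center = center;
        if (center.x - card.bounds.size.width / 2 > self.view.bounds.size.width) {
            [card removeFromSuperview]; // the card drifted off screen
        }
    }
}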
You can add a gesture recognizer to the parent view that contains all your animations, and then use layer hit testing on the parent view's presentation layer to figure out which animating layer got tapped, then fetch that layer's delegate, which will be the view you are animating. I have a project on GitHub that shows how to do this second technique. It only detects taps on a single moving view, but it will show you the basics: Core Animation demo project on github.
(up-votes always appreciated if you find this post helpful)
It looks to me like your problem is really just an incomplete understanding of how to convert a point between coordinate spaces. This code works exactly as expected:
- (void)viewDidLoad
{
    [super viewDidLoad];
    CGPoint endPoint = CGPointMake([[self view] bounds].size.width,
                                   [[self view] bounds].size.height);
    CABasicAnimation *animation = [CABasicAnimation animationWithKeyPath:@"position"];
    animation.fromValue = [NSValue valueWithCGPoint:[[_imageView layer] position]];
    animation.toValue = [NSValue valueWithCGPoint:endPoint];
    animation.duration = 30.0f;
    [[_imageView layer] addAnimation:animation forKey:@"position"];
}

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *t = [touches anyObject];
    CGPoint thePoint = [t locationInView:self.view];
    thePoint = [[_imageView layer] convertPoint:thePoint
                                        toLayer:[[self view] layer]];
    if ([[_imageView layer].presentationLayer hitTest:thePoint]) {
        NSLog(@"You touched a Shape!");
    } else {
        NSLog(@"background touched");
    }
}
Notice this line in particular:
thePoint = [[_imageView layer] convertPoint:thePoint
                                    toLayer:[[self view] layer]];
When I tap on the image view while it's animating, I get "You touched a Shape!" in the console window, and I get "background touched" when I tap around it. That's what you're wanting, right?
Here's a sample project on Github
UPDATE
To help with your follow-up question in the comments, I've written the touchesBegan code a little differently. Imagine that you've added all of your image views to an array (cleverly named imageViews) when you created them. You would alter your code to look something like this:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *t = [touches anyObject];
    CGPoint thePoint = [t locationInView:self.view];
    for (UIImageView *imageView in [self imageViews]) {
        // Convert into a local variable so the original point
        // isn't clobbered between loop iterations.
        CGPoint convertedPoint = [[imageView layer] convertPoint:thePoint
                                                         toLayer:[[self view] layer]];
        if ([[imageView layer].presentationLayer hitTest:convertedPoint]) {
            NSLog(@"Found it!!");
            break; // No need to keep iterating, we've found it
        } else {
            NSLog(@"Not this one!");
        }
    }
}
I'm not sure how expensive this is, so you may have to profile it, but it should do what you're expecting.
I am trying to make a drawing app that has a control for the opacity of the brush, but when I lowered the opacity, the result came out wrong (see the image, not reproduced here). I used Core Graphics.
How will I resolve this issue?
Here is some of my code.
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event // upon touches
{
    UITouch *touch = [touches anyObject];
    previousPoint1 = [touch locationInView:self.view];
    previousPoint2 = [touch locationInView:self.view];
    currentTouch = [touch locationInView:self.view];
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event // upon moving
{
    UITouch *touch = [touches anyObject];
    previousPoint2 = previousPoint1;
    previousPoint1 = currentTouch;
    currentTouch = [touch locationInView:self.view];
    CGPoint mid1 = midPoint(previousPoint2, previousPoint1);
    CGPoint mid2 = midPoint(currentTouch, previousPoint1);

    UIGraphicsBeginImageContext(CGSizeMake(1024, 768));
    [imgDraw.image drawInRect:CGRectMake(0, 0, 1024, 768)];
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetLineCap(context, kCGLineCapRound);
    CGContextSetLineWidth(context, slider.value);
    CGContextSetBlendMode(context, blendMode);
    CGContextSetRGBStrokeColor(context, red, green, blue, 0.5);
    CGContextBeginPath(context);
    CGContextMoveToPoint(context, mid1.x, mid1.y);
    CGContextAddQuadCurveToPoint(context, previousPoint1.x, previousPoint1.y, mid2.x, mid2.y);
    CGContextStrokePath(context);
    imgDraw.image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    endingPoint = currentTouch;
}
Rather than updating the image in every touchesMoved, I'd suggest you don't re-save the image every time, but rather add each touchesMoved point to an array of data points. Then, in your drawing routine (as NSResponder suggests), pull up the original image and redraw the whole path mapped out by your array of points. If you want to update your image at some point, do it in touchesEnded, but not for every touchesMoved.
Don't do your drawing in -touchesMoved:withEvent:. In any event method, you should be updating what needs to be drawn, and then send -setNeedsDisplay.
As your code is written above, you're creating a path between each pair of locations you're getting from the event messages.
You should create a path in -touchesBegan:withEvent:, and add to it in -touchesMoved:withEvent:.
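Putting those two suggestions together, a minimal sketch of a view that keeps one UIBezierPath per stroke, redraws it in -drawRect:, and only flattens it into an image in -touchesEnded: might look like this (DrawingView and its ivars are illustrative names, not from the question). Because the whole stroke is stroked exactly once per redraw, a semi-transparent brush no longer builds up extra color where segments overlap:
#import <UIKit/UIKit.h>

@interface DrawingView : UIView
@end

@implementation DrawingView {
    UIBezierPath *currentPath; // the in-progress stroke
    UIImage *savedImage;       // everything drawn so far
}

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    currentPath = [UIBezierPath bezierPath];
    currentPath.lineWidth = 3.0;
    currentPath.lineCapStyle = kCGLineCapRound;
    [currentPath moveToPoint:[[touches anyObject] locationInView:self]];
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    // Just extend the path and ask for a redraw; no image re-saving here.
    [currentPath addLineToPoint:[[touches anyObject] locationInView:self]];
    [self setNeedsDisplay];
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    // Flatten the finished stroke into the saved image once per stroke.
    UIGraphicsBeginImageContextWithOptions(self.bounds.size, NO, 0);
    [savedImage drawInRect:self.bounds];
    [[[UIColor blackColor] colorWithAlphaComponent:0.5] setStroke];
    [currentPath stroke];
    savedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    currentPath = nil;
    [self setNeedsDisplay];
}

- (void)drawRect:(CGRect)rect {
    [savedImage drawInRect:self.bounds];
    // The stroke stays an even half-opacity because the whole path
    // is stroked once, rather than segment by segment.
    [[[UIColor blackColor] colorWithAlphaComponent:0.5] setStroke];
    [currentPath stroke];
}
@end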
I have two images, both of the same scene and the same size, that take up the entire screen. One is blurry, one is in focus. The desired effect is that the user will initially see the blurry image, and as they drag their finger across the screen from left to right, the part of the image to the left of the drag point comes into focus (for example, if they drag only halfway across, the left half of the scene is the focused image while the right half is still blurry).
I'm doing this by subclassing UIImageView with the focused image as the image, then adding a CALayer of the blurry image with a mask applied, then changing the location of the mask according to touchesBegan/touchesMoved. The problem is that performance is very slow using the approach below, so I'm wondering what I'm doing wrong.
@interface DragMaskImageView : UIImageView
@end

@implementation DragMaskImageView {
    BOOL userIsTouchingMask;
    CALayer *maskingLayer;
    CALayer *topLayer;
    float horzDistanceOfTouchFromCenter;
}

- (void)awakeFromNib {
    topLayer = [CALayer layer];
    topLayer.contents = (id)[UIImage imageNamed:@"blurryImage.jpg"].CGImage;
    topLayer.frame = CGRectMake(0, 0, 480, 300);
    [self.layer addSublayer:topLayer];

    maskingLayer = [CALayer layer];
    maskingLayer.contents = (id)[UIImage imageNamed:@"maskImage.png"].CGImage;
    maskingLayer.anchorPoint = CGPointMake(0.0, 0.0);
    maskingLayer.bounds = CGRectMake(0, 0, 480, 300);
    [topLayer setMask:maskingLayer];
}

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint touchPoint = [[[event allTouches] anyObject] locationInView:self];
    if (touchPoint.x < maskingLayer.frame.origin.x) {
        NSLog(@"user is touching to the left of mask - disregard");
        userIsTouchingMask = NO;
    } else {
        NSLog(@"user is touching");
        horzDistanceOfTouchFromCenter = touchPoint.x - maskingLayer.frame.origin.x;
        userIsTouchingMask = YES;
    }
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    if (userIsTouchingMask) {
        CGPoint touchPoint = [[[event allTouches] anyObject] locationInView:self];
        float newMaskX = touchPoint.x - horzDistanceOfTouchFromCenter;
        if (newMaskX < 0) {
            newMaskX = 0;
        }
        if (newMaskX > 480) {
            newMaskX = 480;
        }
        maskingLayer.frame = CGRectMake(newMaskX, 0, 480, 300);
    }
}
I checked the related thread core animation calayer mask animation performance, but setting shouldRasterize to YES on any layer doesn't seem to help the performance problem.
I ran into the same problem: updating the frame of the mask always lags behind the touch. I found that actually re-creating the mask with the new frame ensures that the mask is always updated instantly.
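Applied to the touchesMoved: code from the question, the work-around might look something like this (a sketch of the idea, not the exact code from my project):
// Replace the mask outright instead of mutating its frame;
// a brand-new layer has no implicit animation to lag behind the touch.
CALayer *newMask = [CALayer layer];
newMask.contents = (id)[UIImage imageNamed:@"maskImage.png"].CGImage;
newMask.anchorPoint = CGPointMake(0.0, 0.0);
newMask.frame = CGRectMake(newMaskX, 0, 480, 300);
topLayer.mask = newMask;
maskingLayer = newMask;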
Perhaps the problem is that the layer's implicit animation is causing it to appear sluggish. Turning off the implicit animation should fix the problem:
[CATransaction begin];
[CATransaction setDisableActions:YES];
maskingLayer.frame = CGRectMake(newMaskX, 0, 480, 300);
[CATransaction commit];
This is why @Rizwan's work-around works. It's a way of bypassing the implicit animation.
I'm designing an app that involves linear graphs. What I have right now is a line with three dots: one in the center for moving the whole line, and the other two, lower and higher on the line, for rotating it. The problem I have is figuring out how to rotate the other two dots when the user touches either of them.
I haven't actually done much with the rotation dots yet, but here's my code so far:
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    CGPoint point = [touch locationInView:graph];
    if (CGRectContainsPoint(moveDot.frame, point)) {
        moveDotIsTouched = YES;
    }
    if (CGRectContainsPoint(rotateDot1.frame, point)) {
        rotateDot1IsTouched = YES;
    }
    if (CGRectContainsPoint(rotateDot2.frame, point)) {
        rotateDot2IsTouched = YES;
    }
}

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    CGPoint point = [touch locationInView:graph];
    if (moveDotIsTouched) {
        if ((point.y > 0) && (point.y < 704)) {
            if (point.y < 64) { rotateDot1.hidden = YES; }
            else { rotateDot1.hidden = NO; }
            if (point.y > 640) { rotateDot2.hidden = YES; }
            else { rotateDot2.hidden = NO; }

            moveDot.center = CGPointMake(moveDot.center.x, point.y);
            rotateDot1.center = CGPointMake(moveDot.center.x + 64, moveDot.center.y - 64);
            rotateDot2.center = CGPointMake(moveDot.center.x - 64, moveDot.center.y + 64);

            graph.base = -(point.y) / 32 + 11;
            if (graph.base < 0) {
                [functionField setText:[NSString stringWithFormat:@"y = %.2fx %.2f", graph.slope, graph.base]];
            } else if (graph.base > 0) {
                [functionField setText:[NSString stringWithFormat:@"y = %.2fx + %.2f", graph.slope, graph.base]];
            } else {
                [functionField setText:[NSString stringWithFormat:@"y = %.2fx", graph.slope]];
            }
            [yintField setText:[NSString stringWithFormat:@"%.2f", graph.base]];
            [self changeTable];
        }
    }
    [graph setNeedsDisplay];
}
The numbers involving the positioning of the dots are measured in pixels. The only way I can really think of to rotate the outside dots is by calculating a radius and making sure the dots stay on that radius, but I'm wondering if there isn't something simpler than that. Does anyone know a simple way to move a UIImageView in a circle?
KTOneFingerRotationGestureRecognizer is a custom UIGestureRecognizer for doing one finger rotations in iOS apps. It tracks finger movement around a central point. I don't know if this is exactly what you want, but it may point you in the right direction.
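If you'd rather compute the circular motion yourself, the math is just atan2 plus a fixed radius: take the angle from the line's center dot to the finger, then place both rotation dots on the circle at that angle. A rough sketch against the question's ivars (the graph.slope update is a guess at how the rest of the class works):
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];
    CGPoint point = [touch locationInView:graph];
    if (rotateDot1IsTouched || rotateDot2IsTouched) {
        // Angle from the line's center dot to the finger.
        CGFloat angle = atan2(point.y - moveDot.center.y, point.x - moveDot.center.x);
        if (rotateDot2IsTouched) {
            angle += M_PI; // the second dot sits on the opposite side of the circle
        }
        CGFloat radius = 64.0 * M_SQRT2; // matches the (+64, -64) offsets used above
        rotateDot1.center = CGPointMake(moveDot.center.x + radius * cos(angle),
                                        moveDot.center.y + radius * sin(angle));
        rotateDot2.center = CGPointMake(moveDot.center.x - radius * cos(angle),
                                        moveDot.center.y - radius * sin(angle));
        graph.slope = -tan(angle); // screen y grows downward, so negate for graph coordinates
        [graph setNeedsDisplay];
    }
}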