Change Brush Size Using UISlider - objective-c

I am trying to use a UISlider to change the brush size in an iPad app I'm building as a school project, but I can't figure out how.
I already have a UISlider connected to its outlet and action.
Here's my code for the brush size.
CGContextSetLineCap(UIGraphicsGetCurrentContext(), kCGLineCapRound);
CGContextSetLineWidth(UIGraphicsGetCurrentContext(), 25.0);
CGContextSetRGBStrokeColor(UIGraphicsGetCurrentContext(), redAmt, greenAmt, blueAmt, alpha);
CGContextMoveToPoint(UIGraphicsGetCurrentContext(), endingPoint.x, endingPoint.y);
CGContextAddLineToPoint(UIGraphicsGetCurrentContext(), endingPoint.x, endingPoint.y);
CGContextStrokePath(UIGraphicsGetCurrentContext());
CGContextFlush(UIGraphicsGetCurrentContext());
touchDraw.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

I'm not sure I follow you. Right now you're just setting the line width to 25.0f. Why don't you set a variable to the value of the UISlider each time it changes, and set the line width to that?

This line:
CGContextSetLineWidth(UIGraphicsGetCurrentContext(), 25.0);
needs to be something like this:
CGContextSetLineWidth(UIGraphicsGetCurrentContext(), self.brushSizeSlider.value);
where brushSizeSlider is the name of the IBOutlet property wired to your brush size slider.
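A minimal sketch of that wiring (the property and action names here are assumptions, not from the original post):

```objc
// Assumed declarations in the view controller's interface:
// @property (nonatomic) CGFloat brushSize;
// @property (weak, nonatomic) IBOutlet UISlider *brushSizeSlider;

// Action wired to the slider's Value Changed event.
- (IBAction)brushSizeChanged:(UISlider *)sender {
    self.brushSize = sender.value;   // remember the current brush size
}
```

Then the drawing code reads the stored value: `CGContextSetLineWidth(UIGraphicsGetCurrentContext(), self.brushSize);`. Storing the value in a property also lets you use it from touch handlers that don't have direct access to the slider.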

Related

Saving contents of imageContext?

My project is to create a heat-map app. To access the heat maps of previously recorded teams, I need to know how to save the drawings I made using an image context.
- (void)drawRect
{
    CGFloat dashes[2] = {25, 25};
    UIGraphicsBeginImageContext(self.field.frame.size);
    [self.field.image drawInRect:CGRectMake(0, 0, self.field.frame.size.width, self.field.frame.size.height)];
    if ([_diagramselect selectedSegmentIndex] == 3) {
        CGContextSetLineDash(UIGraphicsGetCurrentContext(), 0, dashes, 2);
    }
    CGContextMoveToPoint(UIGraphicsGetCurrentContext(), startlocation.x - 56, startlocation.y - 92);
    CGContextAddLineToPoint(UIGraphicsGetCurrentContext(), endlocation.x - 56, endlocation.y - 92);
    CGContextSetLineCap(UIGraphicsGetCurrentContext(), kCGLineCapSquare);
    CGContextSetLineWidth(UIGraphicsGetCurrentContext(), brush);
    CGContextSetRGBStrokeColor(UIGraphicsGetCurrentContext(), red, green, blue, 1.0);
    CGContextSetBlendMode(UIGraphicsGetCurrentContext(), kCGBlendModeNormal);
    CGContextStrokePath(UIGraphicsGetCurrentContext());
    self.field.image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
}
How can I save what I've drawn on top of a field image to access later? Is it possible?
You can create an image ref from the context:
CGImageRef imageMasked = CGBitmapContextCreateImage(context);
UIImage *finalImage = [UIImage imageWithCGImage:imageMasked];
CGImageRelease(imageMasked); // CGBitmapContextCreateImage follows the Create rule, so release it
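To persist that image so it can be reopened later, one option (a sketch; the file name is an assumption) is to write it as a PNG into the app's Documents directory:

```objc
// Save the rendered UIImage as a PNG in the Documents directory.
NSData *pngData = UIImagePNGRepresentation(finalImage);
NSString *documentsDir = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,
                                                              NSUserDomainMask, YES) firstObject];
NSString *path = [documentsDir stringByAppendingPathComponent:@"heatmap.png"]; // hypothetical file name
[pngData writeToFile:path atomically:YES];

// Later, reload it:
UIImage *saved = [UIImage imageWithContentsOfFile:path];
```

PNG keeps the drawing lossless; UIImageJPEGRepresentation would be smaller but would blur thin stroke edges.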

iOS - unable to erase a UIImage like a rubber eraser

In my app I have two views, one above the other. In the lower view I have a captured image, and in the upper view I have placed another image. If the user needs to make changes to the upper image, I give them the option of erasing it.
Everything works except the erasing: when I try to erase the image, the result looks broken. The image is not removed cleanly; when I erase, it looks like the image below.
I want it to look like the following instead.
How can I do this? Please help.
Following is my code:
UIGraphicsBeginImageContext(frontImage.frame.size);
[frontImage.image drawInRect:CGRectMake(0, 0, frontImage.frame.size.width, frontImage.frame.size.height)];
CGContextSetLineCap(UIGraphicsGetCurrentContext(), kCGLineCapRound);
CGContextSetLineWidth(UIGraphicsGetCurrentContext(), 10);
CGContextSetRGBStrokeColor(UIGraphicsGetCurrentContext(), 1, 0, 0, 10);
CGContextBeginPath(UIGraphicsGetCurrentContext());
CGContextMoveToPoint(UIGraphicsGetCurrentContext(), lastPoint.x, lastPoint.y);
CGContextClearRect (UIGraphicsGetCurrentContext(), CGRectMake(lastPoint.x, lastPoint.y, 50, 50));
CGContextStrokePath(UIGraphicsGetCurrentContext());
frontImage.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
The problem is that you're using a square brush, and you're only erasing that square at a single point on the line the user drew. Also, your stroke color is completely wrong for what you're trying to do.
It looks like you're trying to set the stroke to a clear color and draw a line between the previous and current points. If you want to do that, you should do this:
CGContextRef currCtx = UIGraphicsGetCurrentContext();
CGContextSetRGBStrokeColor(currCtx, 0, 0, 0, 0);
CGContextMoveToPoint(currCtx, startPt.x, startPt.y);
CGContextAddLineToPoint(currCtx, endPt.x, endPt.y);
In this case startPt.x/y is the location of the previous touch, and endPt.x/y is the current touch.
Note that to get lines as nice as in the picture you posted, you'll need to actually use an antialiased texture and draw it at every point along the line, varying its size. But the above should get you something workable that looks pretty good.
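A rough sketch of that texture-stamping idea (the method and brush image are hypothetical, not from the answer above): walk along the segment and draw a soft brush texture at regular intervals.

```objc
// Stamp a soft, antialiased brush texture along the segment from startPt to endPt.
// "brushTexture" is a hypothetical pre-rendered UIImage with a feathered edge.
- (void)stampFrom:(CGPoint)startPt to:(CGPoint)endPt
        withBrush:(UIImage *)brushTexture size:(CGFloat)size {
    CGFloat dx = endPt.x - startPt.x;
    CGFloat dy = endPt.y - startPt.y;
    CGFloat distance = sqrtf(dx * dx + dy * dy);
    CGFloat spacing = MAX(size * 0.25f, 1.0f);            // overlap stamps for a continuous stroke
    NSInteger steps = MAX((NSInteger)(distance / spacing), 1);
    for (NSInteger i = 0; i <= steps; i++) {
        CGFloat t = (CGFloat)i / steps;
        CGPoint p = CGPointMake(startPt.x + dx * t, startPt.y + dy * t);
        [brushTexture drawInRect:CGRectMake(p.x - size / 2, p.y - size / 2, size, size)];
    }
}
```

For erasing, the same loop works with `[brushTexture drawInRect:... blendMode:kCGBlendModeDestinationOut alpha:1.0]`, so each stamp punches a soft-edged hole instead of painting.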
UIGraphicsBeginImageContext(imageView1.frame.size);
[imageView1.image drawInRect:CGRectMake(0, 0, imageView1.frame.size.width, imageView1.frame.size.height)];
CGContextSetBlendMode(UIGraphicsGetCurrentContext(), kCGBlendModeClear);
CGContextSetLineCap(UIGraphicsGetCurrentContext(), kCGLineCapRound);
CGContextSetLineWidth(UIGraphicsGetCurrentContext(), 25.0);
CGContextSetStrokeColorWithColor(UIGraphicsGetCurrentContext(), [[UIColor clearColor] CGColor]);
CGContextBeginPath(UIGraphicsGetCurrentContext());
CGContextMoveToPoint(UIGraphicsGetCurrentContext(), point.x, point.y);
CGContextAddLineToPoint(UIGraphicsGetCurrentContext(), point.x, point.y);
CGContextStrokePath(UIGraphicsGetCurrentContext());
imageView1.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

Resize UIImageView after finish drawing

I'm trying to draw lines in my app. First I set the drawing window size to my view's size. After I finish drawing, I want to resize the UIImageView so that there's no empty space around the drawing. Is there an option to auto-resize the UIImageView, like sizeToFit on UILabel?
UIGraphicsBeginImageContext(self.vwDesktop.frame.size);
[drawImage.image drawInRect:CGRectMake(0, 0, self.vwDesktop.frame.size.width, self.vwDesktop.frame.size.height)];
CGContextSetLineCap(UIGraphicsGetCurrentContext(), kCGLineCapRound);
CGContextSetLineWidth(UIGraphicsGetCurrentContext(), 10.0);
CGContextSetRGBStrokeColor(UIGraphicsGetCurrentContext(), 1.0, 0.0, 0.0, 1.0);
CGContextMoveToPoint(UIGraphicsGetCurrentContext(), lastPoint.x, lastPoint.y);
CGContextAddLineToPoint(UIGraphicsGetCurrentContext(), lastPoint.x, lastPoint.y);
CGContextStrokePath(UIGraphicsGetCurrentContext());
CGContextFlush(UIGraphicsGetCurrentContext());
drawImage.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
drawImage.userInteractionEnabled = YES;
If I understand your question correctly, is it the contentMode property on UIView (and hence UIImageView) that you want? This determines how the image is sized to fit the view.
As an aside, you don't need to call UIGraphicsGetCurrentContext() in each of your Core Graphics functions. Why not just save the context in a local variable:
CGContextRef ctx = UIGraphicsGetCurrentContext();
and use that CGContextRef each time you need to refer to the current context?
However, it looks to me that you could just do all of this inside a UIView's drawRect method without having to deal with creating an image context, unless for some reason you want to keep the UIImage of the line that you are drawing.
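A sketch of that drawRect approach (the class and property names are assumptions): keep the touch points in an array and stroke them whenever the view redraws, with no image context at all.

```objc
// Hypothetical UIView subclass that strokes stored touch points in drawRect:.
@interface LineView : UIView
@property (nonatomic, strong) NSMutableArray *points; // CGPoints boxed in NSValue
@end

@implementation LineView
- (void)drawRect:(CGRect)rect {
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextSetLineCap(ctx, kCGLineCapRound);
    CGContextSetLineWidth(ctx, 10.0);
    CGContextSetRGBStrokeColor(ctx, 1.0, 0.0, 0.0, 1.0);
    // Connect each stored point to the next one.
    for (NSUInteger i = 1; i < self.points.count; i++) {
        CGPoint a = [self.points[i - 1] CGPointValue];
        CGPoint b = [self.points[i] CGPointValue];
        CGContextMoveToPoint(ctx, a.x, a.y);
        CGContextAddLineToPoint(ctx, b.x, b.y);
    }
    CGContextStrokePath(ctx);
}
@end
```

In the touch handlers you would append the new point to `self.points` and call `[self setNeedsDisplay]`.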

CoreGraphics drawing on a large UIImage

I have a UIImage, and I draw lines on it that follow the user's finger. It's like a drawing board. This works perfectly when the UIImage is small, say 500 x 600, but at something like 1600 x 1200 it gets really scratchy and laggy. Is there a way I can optimize this? This is my drawing code in touchesMoved:
UIGraphicsBeginImageContext(self.frame.size);
[drawImageView.image drawInRect:CGRectMake(0, 0, self.frame.size.width, self.frame.size.height)];
CGContextSetLineCap(UIGraphicsGetCurrentContext(), kCGLineCapRound);
CGContextSetLineWidth(UIGraphicsGetCurrentContext(), 15.0);
CGContextSetRGBStrokeColor(UIGraphicsGetCurrentContext(), 1.0, 0.0, 0.0, 1.0);
//CGContextSetAlpha(UIGraphicsGetCurrentContext(), 0.5f);
CGContextBeginPath(UIGraphicsGetCurrentContext());
CGContextMoveToPoint(UIGraphicsGetCurrentContext(), lastPoint.x, lastPoint.y);
CGContextAddLineToPoint(UIGraphicsGetCurrentContext(), currentPoint.x, currentPoint.y);
CGContextStrokePath(UIGraphicsGetCurrentContext());
drawImageView.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Thanks.
Instead of rendering the entire 1600 x 1200 frame in a single pass, why not draw it only as it's needed? Because you redraw the whole frame (which lives in memory) on every touch, performance suffers.
Try CATiledLayer. You only need to draw roughly one screenful at a time, which your device can handle; anything beyond that you render on the fly as the user scrolls.
This is what is used in Google Maps on the iPad and iPhone. Hope this helps...
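A minimal sketch of wiring up CATiledLayer (the class name and tile size are assumptions; requires `#import <QuartzCore/QuartzCore.h>`):

```objc
// Hypothetical UIView subclass backed by a CATiledLayer.
@interface TiledCanvasView : UIView
@end

@implementation TiledCanvasView
+ (Class)layerClass {
    return [CATiledLayer class];              // back this view with a tiled layer
}

- (instancetype)initWithFrame:(CGRect)frame {
    if ((self = [super initWithFrame:frame])) {
        CATiledLayer *tiled = (CATiledLayer *)self.layer;
        tiled.tileSize = CGSizeMake(256, 256); // each tile is rendered independently
    }
    return self;
}

// drawRect: is invoked once per visible tile, with rect covering just that tile,
// so only the on-screen portion of the canvas is ever rendered.
- (void)drawRect:(CGRect)rect {
    // stroke only the path segments that intersect `rect`
}
@end
```

The trade-off is that your stroke data must live in a model (an array of points or paths) rather than in a flattened UIImage, so each tile can redraw its own region on demand.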
Instead of creating a new context and drawing the current image into it every time a touch moves, create a context using CGBitmapContextCreate and reuse that context. All previous drawing will already be in the context, and you won't have to create a new context each time a touch moves.
- (CGContextRef)drawingContext {
    if (!context) { // context is an instance variable of type CGContextRef
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        if (!colorSpace) return NULL;
        // bytesPerRow is in bytes: 4 bytes per pixel for 8-bit RGBA
        context = CGBitmapContextCreate(NULL, contextSize.width, contextSize.height,
                                        8, contextSize.width * 4, colorSpace,
                                        kCGImageAlphaPremultipliedFirst);
        CGColorSpaceRelease(colorSpace);
        if (!context) return NULL;
        // Flip the coordinate system so the image draws right side up.
        CGContextConcatCTM(context, CGAffineTransformMake(1, 0, 0, -1, 0, contextSize.height));
        CGContextDrawImage(context, (CGRect){CGPointZero, contextSize}, drawImageView.image.CGImage);
        CGContextSetLineCap(context, kCGLineCapRound);
        CGContextSetLineWidth(context, 15.0);
        CGContextSetRGBStrokeColor(context, 1.0, 0.0, 0.0, 1.0);
    }
    return context;
}
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    CGContextRef ctxt = [self drawingContext];
    CGContextBeginPath(ctxt);
    CGContextMoveToPoint(ctxt, lastPoint.x, lastPoint.y);
    CGContextAddLineToPoint(ctxt, currentPoint.x, currentPoint.y);
    CGContextStrokePath(ctxt);
    CGImageRef img = CGBitmapContextCreateImage(ctxt);
    drawImageView.image = [UIImage imageWithCGImage:img];
    CGImageRelease(img);
}
This code requires the instance variables CGContextRef context and CGSize contextSize. The context will need to be released with CGContextRelease in dealloc, and recreated whenever you change its size.

How to implement the color wheel inside the UIView?

I searched the web, and I found how to draw a color wheel:
Math behind the Colour Wheel
But I would like to implement it by drawing on a UIView. How can I do that using Quartz? Should I draw it from dots or from lines? Thank you.
Maybe this is of use to you:
use the CGContext*** functions to draw dots or lines, and UIGraphicsGetImageFromCurrentImageContext to get a UIImage.
- (void)drawPoint:(CGPoint)point {
    CGContextBeginPath(UIGraphicsGetCurrentContext());
    // A zero-length segment only renders as a dot when the line cap is round.
    CGContextMoveToPoint(UIGraphicsGetCurrentContext(), point.x, point.y);
    CGContextAddLineToPoint(UIGraphicsGetCurrentContext(), point.x, point.y);
    CGContextStrokePath(UIGraphicsGetCurrentContext());
}

- (void)drawLineFrom:(CGPoint)start to:(CGPoint)end {
    CGContextBeginPath(UIGraphicsGetCurrentContext());
    CGContextMoveToPoint(UIGraphicsGetCurrentContext(), start.x, start.y);
    CGContextAddLineToPoint(UIGraphicsGetCurrentContext(), end.x, end.y);
    CGContextStrokePath(UIGraphicsGetCurrentContext());
}
Then draw the UIImage into the UIView by using it as the background color:
UIColor* backColor = [UIColor colorWithPatternImage:your_image_from_cgcontext];
view.backgroundColor = backColor;
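One way to sketch the wheel itself (a hedged example, not from the answer above; the method name is hypothetical): fill a bitmap point by point, mapping each point's angle to hue and its distance from the center to saturation.

```objc
// Render a color wheel into a UIImage: angle -> hue, radius -> saturation.
- (UIImage *)colorWheelWithDiameter:(CGFloat)diameter {
    UIGraphicsBeginImageContext(CGSizeMake(diameter, diameter));
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGFloat radius = diameter / 2.0;
    for (int y = 0; y < (int)diameter; y++) {
        for (int x = 0; x < (int)diameter; x++) {
            CGFloat dx = x - radius, dy = y - radius;
            CGFloat r = sqrtf(dx * dx + dy * dy);
            if (r > radius) continue;                            // outside the wheel
            CGFloat hue = (atan2f(dy, dx) + M_PI) / (2.0 * M_PI); // 0..1 around the circle
            UIColor *c = [UIColor colorWithHue:hue
                                    saturation:r / radius
                                    brightness:1.0
                                         alpha:1.0];
            CGContextSetFillColorWithColor(ctx, c.CGColor);
            CGContextFillRect(ctx, CGRectMake(x, y, 1, 1));      // one "dot" per point
        }
    }
    UIImage *wheel = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return wheel;
}
```

This per-point loop is slow but easy to follow; for production you would fill a pixel buffer directly and wrap it with CGBitmapContextCreate instead of calling CGContextFillRect per point.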