Stroking paths using a pattern color to simulate brush texture - objective-c

I am making a finger painting app and I am having a hard time giving my brush a nice brush-like feel and texture.
I have this code:
UIColor *brushTexture = [UIColor colorWithPatternImage:[UIImage imageNamed:@"Brush_Black.png"]];
CGContextSetStrokeColorWithColor(UIGraphicsGetCurrentContext(), brushTexture.CGColor);
but my output is like this:
The texture of my brush is tiled. How can I make it fit the stroke size and stop the tiling of the image?

Here I found a solution!
UIImage *texture = [UIImage imageNamed:@"brush.png"];
CGPoint vector = CGPointMake(currentPoint.x - endPoint.x, currentPoint.y - endPoint.y);
CGFloat distance = hypotf(vector.x, vector.y);
vector.x /= distance;
vector.y /= distance;
for (CGFloat i = 0; i < distance; i += 1.0f) {
    CGPoint p = CGPointMake(endPoint.x + i * vector.x, endPoint.y + i * vector.y);
    [texture drawAtPoint:p blendMode:blendMode alpha:0.5f];
}

Related

Masking a custom shape with UIImageView

I want to programmatically crop a shape over my UIImageView. I know about creating a path with QuartzCore, but I don't understand contexts. Can someone give me an example by subclassing UIImageView?
So how can I make an image go from this:
To this:
I also need the mask to be transparent.
The easiest approach is to
create a UIBezierPath for the hexagon;
create a CAShapeLayer from that path; and
add that CAShapeLayer as a mask to the image view's layer.
Thus, it might look like:
CAShapeLayer *mask = [CAShapeLayer layer];
mask.path = [[self polygonPathWithRect:self.imageView.bounds lineWidth:0.0 sides:6] CGPath];
mask.strokeColor = [UIColor clearColor].CGColor;
mask.fillColor = [UIColor whiteColor].CGColor;
self.imageView.layer.mask = mask;
where
/** Create UIBezierPath for regular polygon inside a CGRect
 *
 * @param square    The CGRect of the square in which the path should be created.
 * @param lineWidth The width of the stroke around the polygon. The polygon will be inset such that the stroke stays within the above square.
 * @param sides     How many sides to the polygon (e.g. 6=hexagon; 8=octagon, etc.).
 *
 * @return UIBezierPath of the resulting polygon path.
 */
- (UIBezierPath *)polygonPathWithRect:(CGRect)square
                            lineWidth:(CGFloat)lineWidth
                                sides:(NSInteger)sides
{
    UIBezierPath *path = [UIBezierPath bezierPath];
    CGFloat theta = 2.0 * M_PI / sides;                               // how much to turn at every corner
    CGFloat squareWidth = MIN(square.size.width, square.size.height); // width of the square
    // calculate the length of the sides of the polygon
    CGFloat length = squareWidth - lineWidth;
    if (sides % 4 != 0) {                    // if the polygon doesn't sit square in the rect ...
        length = length * cosf(theta / 2.0); // ... inset it so it fits in a circle inside the square
    }
    CGFloat sideLength = length * tanf(theta / 2.0);
    // start drawing at `point` in the lower right corner
    CGPoint point = CGPointMake(squareWidth / 2.0 + sideLength / 2.0, squareWidth - (squareWidth - length) / 2.0);
    CGFloat angle = M_PI;
    [path moveToPoint:point];
    // draw the sides of the polygon
    for (NSInteger side = 0; side < sides; side++) {
        point = CGPointMake(point.x + sideLength * cosf(angle), point.y + sideLength * sinf(angle));
        [path addLineToPoint:point];
        angle += theta;
    }
    [path closePath];
    return path;
}
I posted another answer that illustrates the idea with rounded corners, too.
If you want to implement the addition of this mask as part of a UIImageView subclass, I'll leave that to you. But hopefully this illustrates the basic idea.
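For what it's worth, a minimal sketch of such a subclass might look like the following. The class name HexagonImageView is just for illustration; applying the mask in layoutSubviews keeps it sized correctly if the bounds change, and polygonPathWithRect:lineWidth:sides: is the method shown above, moved into the subclass.

#import <QuartzCore/QuartzCore.h>

@interface HexagonImageView : UIImageView
@end

@implementation HexagonImageView

- (void)layoutSubviews {
    [super layoutSubviews];
    CAShapeLayer *mask = [CAShapeLayer layer];
    mask.path = [[self polygonPathWithRect:self.bounds lineWidth:0.0 sides:6] CGPath];
    mask.fillColor = [UIColor whiteColor].CGColor;
    self.layer.mask = mask;
}

// polygonPathWithRect:lineWidth:sides: from above goes here

@end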

More Precise CGPoint for UILongPressGestureRecognizer

I am using a UILongPressGestureRecognizer, which works perfectly, but the data I get is not precise enough for my use case. The CGPoints that I get seem to be rounded off.
Example points that I get: 100.5, 103.0, etc. The decimal part is always either .5 or .0. Is there a way to get more precise points? I was hoping for something like .xxxx, as in 100.8745, but .xx would do too.
The reason I need this is that I have a circular UIBezierPath and I want to restrict a drag gesture to that circular path: the item should only be draggable along the circumference of the circle. To do this I calculated 720 points on the circle's boundary using its radius. These points have .xxxx precision. If I round them off, the drag is not as smooth around the middle section of the circle. This is because around the middle section (the equator) the points are very close together along the x-axis, so when I rounded off the y-coordinate I lost a lot of points, hence the "not so smooth" drag action.
Here is how I calculate the points:
for (CGFloat i = -154; i < 154; i++) {
    CGPoint point = [self pointAroundCircumferenceFromCenter:center forX:i];
    [bezierPoints addObject:[NSValue valueWithCGPoint:point]];
    i = i - .5;
}
- (CGPoint)pointAroundCircumferenceFromCenter:(CGPoint)center forX:(CGFloat)x
{
    CGFloat radius = 154;
    CGPoint upperPoint = CGPointZero;
    CGPoint lowerPoint = CGPointZero;
    // theta used to be the x variable; I was first calculating points using the angle:
    /* point.x = center.x + radius * cosf(theta);
       point.y = center.y + radius * sinf(theta); */
    CGFloat y = (radius * radius) - (x * x);
    upperPoint.x = x + 156;
    upperPoint.y = 230 - sqrtf(y);
    lowerPoint.x = x + 156;
    lowerPoint.y = sqrtf(y) + 230;
    NSLog(@"x = %f, y = %f", upperPoint.x, upperPoint.y);
    [lowerPoints addObject:[NSValue valueWithCGPoint:lowerPoint]];
    [upperPoints addObject:[NSValue valueWithCGPoint:upperPoint]];
    return upperPoint;
}
I know the code is weird; I mean, why would I add the points into arrays and return only one point back?
Here is how I restrict the movement
- (void)handleLongPress:(UILongPressGestureRecognizer *)recognizer {
    CGPoint finalpoint;
    CGPoint initialpoint;
    CGFloat y;
    CGFloat x;
    CGPoint tempPoint;
    if (recognizer.state == UIGestureRecognizerStateBegan) {
        initialpoint = [recognizer locationInView:self.view];
        CGRect rect = CGRectMake(initialpoint.x, initialpoint.y, 40, 40);
        self.hourHand.frame = rect;
        self.hourHand.center = initialpoint;
        NSLog(@"Long Press Activated at %f,%f", initialpoint.x, initialpoint.y);
    }
    else if (recognizer.state == UIGestureRecognizerStateChanged) {
        CGPoint currentPoint = [recognizer locationInView:self.view];
        x = currentPoint.x - initialpoint.x;
        y = currentPoint.y - initialpoint.y;
        tempPoint = CGPointMake(currentPoint.x, currentPoint.y);
        NSLog(@"temp point ::%f, %f", tempPoint.x, tempPoint.y);
        tempPoint = [self givePointOnCircleForPoint:tempPoint];
        self.hourHand.center = tempPoint;
    }
    else if (recognizer.state == UIGestureRecognizerStateEnded) {
        // finalpoint = [recognizer locationInView:self.view];
        CGRect rect = CGRectMake(tempPoint.x, tempPoint.y, 20, 20);
        self.hourHand.frame = rect;
        self.hourHand.center = tempPoint;
        NSLog(@"Long Press DeActivated at %f,%f", tempPoint.x, tempPoint.y);
    }
}
- (CGPoint)givePointOnCircleForPoint:(CGPoint)point {
    CGPoint resultingPoint;
    for (NSValue *pointValue in allPoints) {
        CGPoint pointFromArray = [pointValue CGPointValue];
        if (point.x == pointFromArray.x) {
            // if (point.y > 230.0) {
            resultingPoint = pointFromArray;
            break;
            // }
        }
    }
    return resultingPoint;
}
Basically, I am taking the x-coordinate of the touched point and returning the y by comparing it to the array of points I calculated earlier.
Currently this code works for only half the circle, because each x has two y values (it's a circle). Ignore this; I think it can be dealt with easily.
In the picture, the white circle is the original circle and the black circle is drawn from the points I calculated, rounded to match the precision of the input I get. If you look around the equator (the red highlighted part) you will see a gap between neighbouring points. This gap is my problem.
To answer your original question: On a device with a Retina display, one pixel is 0.5 points, so 0.5 is the best resolution you can get on this hardware.
(On non-Retina devices, 1 pixel == 1 point.)
But it seems to me that you don't need that points array at all. If I understand the problem correctly, you can use the following code to "restrict" (or "project") an arbitrary point to the circumference of the circle:
CGPoint center = ...;   // Center of the circle
CGFloat radius = ...;   // Radius of the circle
CGPoint point = ...;    // The touched point
CGPoint resultingPoint; // Resulting point on the circumference
// Distance from center to point:
CGFloat dist = hypot(point.x - center.x, point.y - center.y);
if (dist == 0) {
    // The touched point is the circle center.
    // Choose any point on the circumference:
    resultingPoint = CGPointMake(center.x + radius, center.y);
} else {
    // Project point onto the circle's circumference:
    resultingPoint = CGPointMake(center.x + (point.x - center.x) * radius / dist,
                                 center.y + (point.y - center.y) * radius / dist);
}

UIImage resize and crop to fit frame

I know this question has been asked several times, but the answers make my images lose quality: they all become pixelated. So even though it crops and resizes correctly, it loses quality.
Just so you can check it, this is the algorithm that appears in every post:
- (UIImage*)scaleAndCropImage:(UIImage *)aImage forSize:(CGSize)targetSize
{
UIImage *sourceImage = aImage;
UIImage *newImage = nil;
CGSize imageSize = sourceImage.size;
CGFloat width = imageSize.width;
CGFloat height = imageSize.height;
CGFloat targetWidth = targetSize.width;
CGFloat targetHeight = targetSize.height;
CGFloat scaleFactor = 0.0;
CGFloat scaledWidth = targetWidth;
CGFloat scaledHeight = targetHeight;
CGPoint thumbnailPoint = CGPointMake(0.0,0.0);
if (CGSizeEqualToSize(imageSize, targetSize) == NO)
{
CGFloat widthFactor = targetWidth / width;
CGFloat heightFactor = targetHeight / height;
if (widthFactor > heightFactor)
{
scaleFactor = widthFactor; // scale to fit height
}
else
{
scaleFactor = heightFactor; // scale to fit width
}
scaledWidth = width * scaleFactor;
scaledHeight = height * scaleFactor;
// center the image
if (widthFactor > heightFactor)
{
thumbnailPoint.y = (targetHeight - scaledHeight) * 0.5;
}
else
{
if (widthFactor < heightFactor)
{
thumbnailPoint.x = (targetWidth - scaledWidth) * 0.5;
}
}
}
UIGraphicsBeginImageContext(targetSize); // this will crop
CGRect thumbnailRect = CGRectZero;
thumbnailRect.origin = thumbnailPoint;
thumbnailRect.size.width = scaledWidth;
thumbnailRect.size.height = scaledHeight;
[sourceImage drawInRect:thumbnailRect];
newImage = UIGraphicsGetImageFromCurrentImageContext();
if(newImage == nil)
{
NSLog(#"could not scale image");
}
//pop the context to get back to the default
UIGraphicsEndImageContext();
return newImage;
}
On the line where you begin the image context, use UIGraphicsBeginImageContextWithOptions instead of UIGraphicsBeginImageContext. Try something like this:
UIGraphicsBeginImageContextWithOptions(targetSize, NO, 0.0);
Notice the three parameters passed above, I'll go through them in order:
targetSize is the size of the image measured in points (not pixels)
NO is a BOOL value (could be YES or NO) that indicates whether the image is 100% opaque or not. Setting this to NO will preserve transparency and create an alpha channel to handle transparency.
The most important part of the above code is the final parameter, 0.0. This is the image scale factor that will be applied. Passing 0.0 uses the scale factor of the current device's screen, which means the quality will be preserved and the image will look especially good on Retina displays.
Here's the Apple Documentation on UIGraphicsBeginImageContextWithOptions.
You should use UIGraphicsBeginImageContextWithOptions(targetSize, NO, 0.0) instead of UIGraphicsBeginImageContext(targetSize) so that the correct scale factor gets applied to the bitmap.
Specifying 0.0 as the scale factor sets it to the scale of the device's main screen.
Calling UIGraphicsBeginImageContext() is the same as calling UIGraphicsBeginImageContextWithOptions(...) with a scale factor of 1.0.
For more details take a look at: http://developer.apple.com/library/ios/documentation/UIKit/Reference/UIKitFunctionReference/Reference/reference.html#//apple_ref/c/func/UIGraphicsBeginImageContextWithOptions
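Concretely, the only change needed in the scaleAndCropImage:forSize: method above is the line that opens the image context:

// before: renders at 1x, so the result looks pixelated on Retina screens
// UIGraphicsBeginImageContext(targetSize);
// after: 0.0 means "use the main screen's scale", so the bitmap is rendered at 2x on Retina
UIGraphicsBeginImageContextWithOptions(targetSize, NO, 0.0);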

drawRect Optimization

I am working on an application that allows the user to select an image (camera or gallery), then draw on that image with their finger. The area that they draw becomes the transparent portion of a mask. A second image is then drawn below the first image.
I've been working on improving performance, especially on the iPad 3 and I seem to have hit a wall. I am new to iOS and Objective-C.
So my question is: What can I do to improve the drawing performance of my application?
I used this tutorial as a starting point for my code.
First I draw to a cache context:
- (void) drawToCache {
CGRect dirtyPoint1;
CGRect dirtyPoint2;
UIColor *color;
if (point1.x > -1){
hasDrawn = YES;
maskContext = CGLayerGetContext(maskLayer);
blackBackgroundContext = CGLayerGetContext(blackBackgroundLayer);
if (!doUndo){
color = [UIColor whiteColor];
CGContextSetBlendMode(maskContext, kCGBlendModeColor);
}
else{
color = [UIColor clearColor];
CGContextSetBlendMode(maskContext, kCGBlendModeClear);
}
CGContextSetShouldAntialias(maskContext, YES);
CGContextSetRGBFillColor(blackBackgroundContext, 0.0, 0.0, 0.0, 1.0);
CGContextFillRect(blackBackgroundContext, self.bounds);
CGContextSetStrokeColorWithColor(maskContext, [color CGColor]);
CGContextSetLineCap(maskContext, kCGLineCapRound);
CGContextSetLineWidth(maskContext, brushSize);
double x0 = (point0.x > -1) ? point0.x : point1.x; //after 4 touches we should have a back anchor point, if not, use the current anchor point
double y0 = (point0.y > -1) ? point0.y : point1.y; //after 4 touches we should have a back anchor point, if not, use the current anchor point
double x1 = point1.x;
double y1 = point1.y;
double x2 = point2.x;
double y2 = point2.y;
double x3 = point3.x;
double y3 = point3.y;
double xc1 = (x0 + x1) / 2.0;
double yc1 = (y0 + y1) / 2.0;
double xc2 = (x1 + x2) / 2.0;
double yc2 = (y1 + y2) / 2.0;
double xc3 = (x2 + x3) / 2.0;
double yc3 = (y2 + y3) / 2.0;
double len1 = sqrt((x1-x0) * (x1-x0) + (y1-y0) * (y1-y0));
double len2 = sqrt((x2-x1) * (x2-x1) + (y2-y1) * (y2-y1));
double len3 = sqrt((x3-x2) * (x3-x2) + (y3-y2) * (y3-y2));
double k1 = len1 / (len1 + len2);
double k2 = len2 / (len2 + len3);
double xm1 = xc1 + (xc2 - xc1) * k1;
double ym1 = yc1 + (yc2 - yc1) * k1;
double xm2 = xc2 + (xc3 - xc2) * k2;
double ym2 = yc2 + (yc3 - yc2) * k2;
double smooth_value = 0.5;
float ctrl1_x = xm1 + (xc2 - xm1) * smooth_value + x1 - xm1;
float ctrl1_y = ym1 + (yc2 - ym1) * smooth_value + y1 - ym1;
float ctrl2_x = xm2 + (xc2 - xm2) * smooth_value + x2 - xm2;
float ctrl2_y = ym2 + (yc2 - ym2) * smooth_value + y2 - ym2;
CGContextMoveToPoint(maskContext, point1.x, point1.y);
CGContextAddCurveToPoint(maskContext, ctrl1_x, ctrl1_y, ctrl2_x, ctrl2_y, point2.x, point2.y);
CGContextStrokePath(maskContext);
dirtyPoint1 = CGRectMake(point1.x-(brushSize/2), point1.y-(brushSize/2), brushSize, brushSize);
dirtyPoint2 = CGRectMake(point2.x-(brushSize/2), point2.y-(brushSize/2), brushSize, brushSize);
[self setNeedsDisplayInRect:CGRectUnion(dirtyPoint1, dirtyPoint2)];
}
}
And my drawRect:
- (void)drawRect:(CGRect)rect {
CGRect imageRect = CGRectMake(0, 0, 1024, 668);
CGSize size = CGSizeMake(1024, 668);
//Draw the user touches to the context, this creates a black and white image to convert into a mask
UIGraphicsBeginImageContext(size);
CGContextSetShouldAntialias(UIGraphicsGetCurrentContext(), YES);
CGContextDrawLayerInRect(UIGraphicsGetCurrentContext(), imageRect, blackBackgroundLayer);
CGContextDrawLayerInRect(UIGraphicsGetCurrentContext(), imageRect, maskLayer);
CGImageRef maskRef = CGBitmapContextCreateImage(UIGraphicsGetCurrentContext());
UIGraphicsEndImageContext();
//Make the mask
CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef), CGImageGetHeight(maskRef), CGImageGetBitsPerComponent(maskRef), CGImageGetBitsPerPixel(maskRef), CGImageGetBytesPerRow(maskRef), CGImageGetDataProvider(maskRef), NULL, false);
//Mask the user image
CGImageRef masked = CGImageCreateWithMask([self.original CGImage], mask);
UIGraphicsBeginImageContext(size);
CGContextDrawLayerInRect(UIGraphicsGetCurrentContext(), imageRect, backgroundTextureLayer);
CGImageRef background = CGBitmapContextCreateImage(UIGraphicsGetCurrentContext());
UIGraphicsEndImageContext();
//Flip the context so everything is right side up
CGContextTranslateCTM(UIGraphicsGetCurrentContext(), 0, 668);
CGContextScaleCTM(UIGraphicsGetCurrentContext(), 1.0, -1.0);
//Draw the background on the context
CGContextDrawImage(UIGraphicsGetCurrentContext(), imageRect, background);
//Draws the mask on the context
CGContextDrawImage(UIGraphicsGetCurrentContext(), imageRect, masked);
//Release everything to prevent memory leaks
CGImageRelease(background);
CGImageRelease(maskRef);
CGImageRelease(mask);
CGImageRelease(masked);
}
I spent yesterday researching these questions:
How can I improve CGContextFillRect and CGContextDrawImage performance
CGContextDrawImage very slow on iPhone 4
Drawing in CATiledLayer with CoreGraphics CGContextDrawImage
iPhone: How do I add layers to UIView's root layer to display an image with the content property?
iPhone CGContextDrawImage and UIImageJPEGRepresentation drastically slowing down application
Any help is appreciated!
Thanks,
Kevin
I can give you some tips but you're going to need to experiment to see where the problems are (using Instruments):
do not assume you need to draw the whole view - when you need to dirty some area, use setNeedsDisplayInRect. Then in the drawRect only update the dirty area.
the above means your maskRef is much smaller and easier to construct
instead of '[self.original CGImage]', create a smaller subregion image using CGImageCreateWithImageInRect(), which lets you select a subsection of an image, then use that smaller image in conjunction with the mask.
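For the third point, a rough sketch of what the subregion extraction might look like inside drawRect: (the point-to-pixel conversion is an assumption and depends on how self.original is mapped onto the view, and the mask would need to be built for the same rect):

// Only rebuild the masked image for the dirty rect passed to drawRect:
CGFloat scaleX = CGImageGetWidth(self.original.CGImage) / self.bounds.size.width;
CGFloat scaleY = CGImageGetHeight(self.original.CGImage) / self.bounds.size.height;
CGRect pixelRect = CGRectMake(rect.origin.x * scaleX, rect.origin.y * scaleY,
                              rect.size.width * scaleX, rect.size.height * scaleY);
CGImageRef subImage = CGImageCreateWithImageInRect(self.original.CGImage, pixelRect);
CGImageRef maskedSub = CGImageCreateWithMask(subImage, mask);
CGContextDrawImage(UIGraphicsGetCurrentContext(), rect, maskedSub);
CGImageRelease(maskedSub);
CGImageRelease(subImage);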

CGContextDrawAngleGradient?

Dipping my feet into some more Core Graphics drawing, I'm attempting to recreate a wicked looking metallic knob, and I've landed on what is probably a show-stopping issue.
There doesn't seem to be any way to draw angle gradients in Core Graphics. I see there's CGContextDrawRadialGradient() and CGContextDrawLinearGradient(), but there's nothing that I see that would allow me to draw an angle gradient. Does anyone know of a workaround, or a bit of framework hidden away somewhere to accomplish this without pre-rendering the knob into an image file?
(AngleGradientKnob example image: http://dl.dropbox.com/u/3009808/AngleGradient.png)
This is kind of thrown together, but it's the approach I'd probably take. It creates an angle gradient by drawing it directly into a bitmap using some simple trig, then clipping it to a circle. I create a grid of memory using a grayscale colorspace, calculate the angle from a given point to the center, and then color it based on a periodic function running from 0 to 255. You could of course expand this to do RGBA color as well.
Of course you'd cache this and play with the math to get the colors you want. This currently runs all the way from black to white, which doesn't look as good as you'd like.
- (void)drawRect:(CGRect)rect {
    CGImageAlphaInfo alphaInfo = kCGImageAlphaNone;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
    size_t components = CGColorSpaceGetNumberOfComponents(colorSpace);
    size_t width = 100;
    size_t height = 100;
    size_t bitsPerComponent = 8;
    size_t bytesPerComponent = bitsPerComponent / 8;
    size_t bytesPerRow = width * bytesPerComponent * components;
    size_t dataLength = bytesPerRow * height;
    uint8_t data[dataLength];
    CGContextRef imageCtx = CGBitmapContextCreate(&data, width, height, bitsPerComponent,
                                                  bytesPerRow, colorSpace, alphaInfo);
    NSUInteger offset = 0;
    for (NSUInteger y = 0; y < height; ++y) {
        for (NSUInteger x = 0; x < bytesPerRow; x += components) {
            CGFloat opposite = y - height/2.;
            CGFloat adjacent = x - width/2.;
            if (adjacent == 0) adjacent = 0.001;
            CGFloat angle = atan(opposite/adjacent);
            data[offset] = fabs(cos(angle * 2) * 255);
            offset += components * bytesPerComponent;
        }
    }
    CGImageRef image = CGBitmapContextCreateImage(imageCtx);
    CGContextRelease(imageCtx);
    CGColorSpaceRelease(colorSpace);
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGRect buttonRect = CGRectMake(100, 100, width, width);
    CGContextAddEllipseInRect(ctx, buttonRect);
    CGContextClip(ctx);
    CGContextDrawImage(ctx, buttonRect, image);
    CGImageRelease(image);
}
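Since the gradient bitmap only depends on the size, you would typically generate the CGImage once and cache it rather than rebuilding it on every drawRect: pass (as noted above). A minimal sketch of that, assuming the generation code above is factored out into a hypothetical newAngleGradientImageWithWidth:height: method and _gradientImage is an ivar of type CGImageRef:

- (void)drawRect:(CGRect)rect {
    if (_gradientImage == NULL) {
        // Build the 100x100 grayscale angle gradient once and keep it around.
        _gradientImage = [self newAngleGradientImageWithWidth:100 height:100];
    }
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGRect buttonRect = CGRectMake(100, 100, 100, 100);
    CGContextAddEllipseInRect(ctx, buttonRect);
    CGContextClip(ctx);
    CGContextDrawImage(ctx, buttonRect, _gradientImage);
}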
To expand on what's in the comments to the accepted answer, here's the code for generating an angle gradient using Core Image. This should work in iOS 8 or later.
// generate a dummy image of the required size
UIGraphicsBeginImageContextWithOptions(CGSizeMake(256.0, 256.0), NO, [[UIScreen mainScreen] scale]);
CIImage *dummyImage = [CIImage imageWithCGImage:UIGraphicsGetImageFromCurrentImageContext().CGImage];
UIGraphicsEndImageContext();
// define the kernel algorithm
NSString *kernelString = @"kernel vec4 circularGradientKernel(__color startColor, __color endColor, vec2 center, float radius) { \n"
    " vec2 point = destCoord() - center;"
    " float rsq = point.x * point.x + point.y * point.y;"
    " float theta = mod(atan(point.y, point.x), radians(360.0));"
    " return (rsq < radius*radius) ? mix(startColor, endColor, 0.5+0.5*cos(4.0*theta)) : vec4(0.0, 0.0, 0.0, 1.0);"
    "}";
// initialize a Core Image context and the filter kernel
CIContext *context = [CIContext contextWithOptions:nil];
CIColorKernel *kernel = [CIColorKernel kernelWithString:kernelString];
// argument array, corresponding to the first line of the kernel string
NSArray *args = @[ [CIColor colorWithRed:0.5 green:0.5 blue:0.5],
                   [CIColor colorWithRed:1.0 green:1.0 blue:1.0],
                   [CIVector vectorWithCGPoint:CGPointMake(CGRectGetMidX(dummyImage.extent), CGRectGetMidY(dummyImage.extent))],
                   [NSNumber numberWithFloat:200.0] ];
// apply the kernel to our dummy image, and convert the result to a UIImage
CIImage *ciOutputImage = [kernel applyWithExtent:dummyImage.extent arguments:args];
CGImageRef cgOutput = [context createCGImage:ciOutputImage fromRect:ciOutputImage.extent];
UIImage *gradientImage = [UIImage imageWithCGImage:cgOutput];
CGImageRelease(cgOutput);
This generates the following image: