Need assistance with growing & shrinking a circle from its centre in Quartz 2D - Objective-C

I am currently working on a drawing app that has a slider to increase and decrease the line width. I want something simple: a circle next to the slider that previews the current width. I got the circle drawn easily enough, but it is not growing and shrinking from its centre; it grows and shrinks from the top-left x, y. Here is the code:
- (UIImage *)circleOnImage:(int)size
{
    UIGraphicsBeginImageContext(CGSizeMake(25, 25));
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    [[UIColor blackColor] setFill];
    CGContextTranslateCTM(ctx, 12, 12); // also tried changing the coordinates, but that didn't work
    CGRect circleRect = CGRectMake(0, 0, size, size);
    CGContextFillEllipseInRect(ctx, circleRect);
    UIImage *retImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return retImage;
}

Try translating to the centre of the context and centring the circle's rect on that new origin:

CGContextTranslateCTM(ctx, 12.5, 12.5);
CGRect circleRect = CGRectMake(-size/2., -size/2., size, size);
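For reference, here is how the whole method might look with that change applied (just a sketch based on the answer above; the 25x25 context size and the blackColor fill are carried over from the question):

- (UIImage *)circleOnImage:(int)size
{
    UIGraphicsBeginImageContext(CGSizeMake(25, 25));
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    [[UIColor blackColor] setFill];

    // Move the origin to the centre of the 25x25 context...
    CGContextTranslateCTM(ctx, 12.5, 12.5);
    // ...and centre the circle's rect on that origin, so the circle grows and
    // shrinks symmetrically instead of expanding from the top-left corner.
    CGRect circleRect = CGRectMake(-size / 2.0, -size / 2.0, size, size);
    CGContextFillEllipseInRect(ctx, circleRect);

    UIImage *retImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return retImage;
}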

Related

Lines drawn with Core Graphics that are set to the same width sometimes vary in size when drawn

Here is a picture of two lines drawn in a UITableViewCell with the same function, the same width, and the same color.
As you can see, the bottom line is a lot thicker than the other line.
The code I am using for drawing:
[CSDrawing drawLineWithColor:[UIColor blackColor] width:1.0 yPosition:1.0 rect:rect];
[CSDrawing drawLineWithColor:[UIColor blackColor] width:1.0 yPosition:CGRectGetMaxY(rect) - 3.0 rect:rect]; // draw a line on top and bottom
+ (void)drawLineWithColor:(UIColor *)color width:(CGFloat)width yPosition:(CGFloat)yPosition rect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSaveGState(context);
    CGContextMoveToPoint(context, 0.0, yPosition);
    CGContextAddLineToPoint(context, CGRectGetMaxX(rect), yPosition);
    CGContextSetStrokeColorWithColor(context, color.CGColor);
    CGContextSetLineWidth(context, width);
    CGContextStrokePath(context);
    CGContextRestoreGState(context);
}
The problem was with the backgroundView being stretched to fit the content of the cell when the cell was being reused. When the cell was bigger, the pixels were stretched. This is solved by setting the contentMode property to UIViewContentModeRedraw.
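A minimal illustration of that fix, assuming the lines are drawn in a custom background view (the class name CSLineBackgroundView and the setup location are placeholders):

// Hypothetical setup, e.g. in tableView:cellForRowAtIndexPath:
CSLineBackgroundView *backgroundView = [[CSLineBackgroundView alloc] initWithFrame:cell.bounds];

// Without this, a reused cell that grows will stretch the previously rendered
// pixels, making the 1 pt line look thicker. Redraw at the new size instead.
backgroundView.contentMode = UIViewContentModeRedraw;

cell.backgroundView = backgroundView;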

Jigsaw puzzle UIImage irregular cropping

I am developing a jigsaw puzzle for iPhone.
Using a masking technique I have cropped the image into 9 pieces; see the image below.
After cropping, some portion of the image is missing because of the masking. I know this is because the cropped pieces are loaded into square UIImageViews.
My question is: how can I crop the pieces without losing any portion of the image, and how can I position the pieces so they line up with the original?
Build a set of masks corresponding to each puzzle piece. Each mask should be the size of the original image and all black except for a white area with the position and shape of the puzzle piece. Also, maintain a bounding rectangle for each piece (a rectangle that minimally contains the piece in the mask image).
The way to not lose any of the original image is to arrange the masks (and the corresponding bounding rects) as a partition over the image.
Here's a link to some code that demonstrates how to apply a mask. Once the mask is applied, crop the masked image to the bounding rectangle using code like here and elsewhere.
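Once a piece has been masked out of the full-size image, the crop-and-position step might look roughly like this (a sketch, assuming a 1x scale so pixel and point coordinates coincide; maskedFullImage and pieceBoundingRect are placeholders for your masked image and the bounding rect you maintain):

// maskedFullImage: a CGImage the size of the original puzzle image, with
// everything outside this piece already made transparent by its mask.
// pieceBoundingRect: the minimal rect containing the piece, in image coordinates.
CGImageRef pieceImageRef = CGImageCreateWithImageInRect(maskedFullImage, pieceBoundingRect);
UIImage *pieceImage = [UIImage imageWithCGImage:pieceImageRef];
CGImageRelease(pieceImageRef);

// Size the piece's view to the bounding rect (not a fixed square) and place it
// at the bounding rect's origin, so the pieces line up with the original image.
UIImageView *pieceView = [[UIImageView alloc] initWithImage:pieceImage];
pieceView.frame = pieceBoundingRect;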
I also considered dividing the original image with masking, but it can be a bad idea and complicated to manage. For anyone who is new to jigsaw puzzles, this is the best question/answer to start from, and you can also get the source code of a jigsaw puzzle game from git.
Hats off to the man (@Guntis Treulands) who solved this problem. I know this should be a comment rather than an answer, but if I post it as a comment, users who have a problem with a jigsaw puzzle may not find it easily, so I am putting it as an answer.
// Create our colorspaces
imageColorSpace = CGColorSpaceCreateDeviceRGB();
maskColorSpace = CGColorSpaceCreateDeviceGray();

provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)self.puzzleData);
image = CGImageCreateWithPNGDataProvider(provider, NULL, true, kCGRenderingIntentDefault);
CGDataProviderRelease(provider);

// Resize the puzzle image
context = CGBitmapContextCreate(NULL, kPuzzleSize, kPuzzleSize, 8, 0, imageColorSpace, kCGImageAlphaPremultipliedFirst);
CGContextDrawImage(context, CGRectMake(0, 0, kPuzzleSize, kPuzzleSize), image);
CGImageRelease(image);
image = CGBitmapContextCreateImage(context);
CGContextRelease(context);

// Create the image view with the puzzle image
self.imageView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, kPuzzleSize, kPuzzleSize)];
[self.imageView setImage:[UIImage imageWithCGImage:image]];

// Create the puzzle pieces (note that pieces are rotated to the puzzle orientation in order to minimize the number of graphic operations when creating the puzzle images)
for (i = 0; i < appDelegate().puzzleSize * appDelegate().puzzleSize; ++i)
{
    // Recreate the piece view
    [pieces[i] removeFromSuperview];
    pieces[i] = [[CJPieceView alloc] initWithFrame:CGRectMake(0, 0, kPieceSize, kPieceSize) index:i];
    [pieces[i] setTag:-1];

    // Load puzzle piece mask image
    UIImage *maskimage = [self.arrmaskImages objectAtIndex:i];
    NSData *dataMaskImage = UIImagePNGRepresentation(maskimage);
    provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)dataMaskImage);
    tile = CGImageCreateWithPNGDataProvider(provider, NULL, true, kCGRenderingIntentDefault);
    CGDataProviderRelease(provider);
    mask = CGImageCreateCopyWithColorSpace(tile, maskColorSpace);
    CGImageRelease(tile);

    // Create the shadow image for the piece
    context = CGBitmapContextCreate(NULL, kPieceSize / kPieceShadowFactor, kPieceSize / kPieceShadowFactor, 8, 0, imageColorSpace, kCGImageAlphaPremultipliedFirst);
    CGContextClipToMask(context, CGRectMake(0, 0, kPieceSize / kPieceShadowFactor, kPieceSize / kPieceShadowFactor), mask);
    CGContextSetRGBFillColor(context, 0.0, 0.0, 0.0, 1.0);
    CGContextFillRect(context, CGRectMake(0, 0, kPieceSize / kPieceShadowFactor, kPieceSize / kPieceShadowFactor));
    shadow = CGBitmapContextCreateImage(context);
    CGContextRelease(context);

    imageView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, kPieceSize, kPieceSize)];
    [imageView setImage:[UIImage imageWithCGImage:shadow]];
    [imageView setAlpha:kPieceShadowOpacity];
    [imageView setUserInteractionEnabled:NO];
    [pieces[i] addSubview:imageView];
    CGImageRelease(shadow);

    // Create image view with piece image and add it to the piece view
    context = CGBitmapContextCreate(NULL, kPieceSize, kPieceSize, 8, 0, imageColorSpace, kCGImageAlphaPremultipliedFirst);
    CGRect rectPiece = CGRectMake(fmodf(i, appDelegate().puzzleSize) * kPieceDistance, (floorf(i / appDelegate().puzzleSize)) * kPieceDistance, kPieceSize, kPieceSize);
    [self.arrlocations addObject:[NSValue valueWithCGRect:rectPiece]];
    CGContextTranslateCTM(context, (kPieceSize - kPieceDistance) / 2 - fmodf(i, appDelegate().puzzleSize) * kPieceDistance, (kPieceSize - kPieceDistance) / 2 - (appDelegate().puzzleSize - 1 - floorf(i / appDelegate().puzzleSize)) * kPieceDistance);
    CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image), CGImageGetHeight(image)), image);
    subImage = CGBitmapContextCreateImage(context);
    CGContextRelease(context);
    tile = CGImageCreateWithMask(subImage, mask);

    imageView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, kPieceSize, kPieceSize)];
    [imageView setImage:[UIImage imageWithCGImage:tile]];
    [imageView setUserInteractionEnabled:NO];
    [pieces[i] addSubview:imageView];
    CGImageRelease(tile);
    CGImageRelease(subImage);

    DLog(@"%f", pieces[i].frame.size.width);
    pieces[i].transform = CGAffineTransformScale(CGAffineTransformIdentity, kTransformScale, kTransformScale);
    DLog(@"%f %f", kTransformScale, pieces[i].frame.size.width);

    // Release puzzle piece mask
    CGImageRelease(mask);
}
// Clean up
CGColorSpaceRelease(maskColorSpace);
CGColorSpaceRelease(imageColorSpace);
CGImageRelease(image);

UIScrollView changes UIButton's image quality on zoom out

I've got a large display area that can be panned and zoomed to view different objects. The problem I'm running into is that the quality of the UIButtons' PNG images becomes somewhat degraded when I'm zoomed out (it goes back to normal when I zoom back in to 100%). It almost looks as if the images become oversharpened. Is this something I'm going to have to live with, or is there a way to get rid of this grainy edge effect? The aspect ratio of the images is always 1:1, by the way.
I was able to solve this by using the answer found here in my scrollViewDidEndZooming method. Here is my code:
Resize function
- (UIImage *)resizeImage:(UIImage *)image newSize:(CGSize)newSize {
    CGRect newRect = CGRectIntegral(CGRectMake(0, 0, newSize.width, newSize.height));
    CGImageRef imageRef = image.CGImage;

    UIGraphicsBeginImageContextWithOptions(newSize, NO, 0);
    CGContextRef context = UIGraphicsGetCurrentContext();

    // Set the quality level to use when rescaling
    CGContextSetInterpolationQuality(context, kCGInterpolationHigh);
    CGAffineTransform flipVertical = CGAffineTransformMake(1, 0, 0, -1, 0, newSize.height);
    CGContextConcatCTM(context, flipVertical);

    // Draw into the context; this scales the image
    CGContextDrawImage(context, newRect, imageRef);

    // Get the resized image from the context as a UIImage
    CGImageRef newImageRef = CGBitmapContextCreateImage(context);
    UIImage *newImage = [UIImage imageWithCGImage:newImageRef];

    CGImageRelease(newImageRef);
    UIGraphicsEndImageContext();
    return newImage;
}
ScrollView Method
(Widget is a UIViewController subclass which contains a button and a "widgetImage" property that stores the full-resolution image the button should display.)
- (void)scrollViewDidEndZooming:(UIScrollView *)scrollView withView:(UIView *)view atScale:(float)scale
{
    for (Widget *theWidget in widgets) {
        UIImage *newScaledImage = [self resizeImage:theWidget.widgetImage newSize:CGSizeMake(theWidget.view.frame.size.width * scale, theWidget.view.frame.size.height * scale)];
        [theWidget.widgetButton setImage:newScaledImage forState:UIControlStateNormal];
        // theWidget.widgetButton.currentImage = newScaledImage;
    }
}

How to fill gaps in a UIImageView (not covered by the image) with color?

I want to create an image of a fixed size, e.g. 612 by 612. I am using an image picker to select photos from my iPhone. To make sure every photo fits the 612 by 612 size without distortion, I use the following method to rescale the photo. As a result, however, blank spaces can appear in the final image (see the example below).
I am using the following code to scale my image (of fixed size 612 by 612)
// Scale the image to fit the image view
UIImage *image = [self scaleImage:img toRectSize:CGRectMake(0, 0, 612, 612)];

// Method to scale the image
- (UIImage *)scaleImage:(UIImage *)img toRectSize:(CGRect)screenRect
{
    UIGraphicsBeginImageContext(screenRect.size);

    float hfactor = img.size.width / screenRect.size.width;
    float vfactor = img.size.height / screenRect.size.height;
    float factor = MAX(hfactor, vfactor);

    float newWidth = img.size.width / factor;
    float newHeight = img.size.height / factor;
    float leftOffset = (screenRect.size.width - newWidth) / 2;
    float topOffset = (screenRect.size.height - newHeight) / 2;

    CGRect newRect = CGRectMake(leftOffset, topOffset, newWidth, newHeight);
    [img drawInRect:newRect blendMode:kCGBlendModePlusDarker alpha:1];

    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
As mentioned, because the image is sometimes not a full square, I get a result like what you see below:
How can I fill the white spaces in the image with black color?
One way is to set the backgroundColor of UIImageView to blackColor.
Another way is to fill the rect with blackColor while you scale the image.
- (UIImage *)scaleImage:(UIImage *)img toRectSize:(CGRect)screenRect {
    ...
    CGRect newRect = CGRectMake(leftOffset, topOffset, newWidth, newHeight);

    // Fill the original rect with black color
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetFillColorWithColor(context, [[UIColor blackColor] CGColor]);
    CGContextFillRect(context, screenRect);

    [img drawInRect:newRect blendMode:kCGBlendModeNormal alpha:1];
    ...
}
Note that the blendMode in the drawInRect:blendMode:alpha: call is set to kCGBlendModeNormal. If you use some other blend mode you may get undesired results; for example, if you set the blend mode to kCGBlendModePlusDarker and fill the rect with blackColor, the entire image will become black.
Set the background color on your UIImageView to black and you'll get what you want
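In code, that suggestion is just one line (imageView here stands for whatever UIImageView displays the 612 by 612 result):

// Any part of the image view's bounds not covered by the scaled image
// (the letterbox gaps) will show the background color instead of white.
imageView.backgroundColor = [UIColor blackColor];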

How to draw an NSImage like the images in NSButtons (with a sense of depth)?

Is there any way to draw an NSImage like the images in NSButtons or other Cocoa interface elements?
Here are some examples:
Apple uses PDFs with black icons:
If you simply want this effect to be applied when you use your own images in a button, use [myImage setTemplate:YES]. There is no built-in way to draw images with this effect outside of a button that has the style shown in your screenshots.
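For example (button is assumed to be an existing NSButton; the image name is a placeholder):

NSImage *myImage = [NSImage imageNamed:@"MyIcon"];
[myImage setTemplate:YES];   // let AppKit render it with the standard etched button appearance
[button setImage:myImage];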
You can however replicate the effect using Core Graphics. If you look closely, the effect consists of a horizontal gradient, a white drop shadow and a dark inner shadow (the latter is the most difficult).
You could implement this as a category on NSImage:
// NSImage+EtchedDrawing.h:
@interface NSImage (EtchedImageDrawing)
- (void)drawEtchedInRect:(NSRect)rect;
@end

// NSImage+EtchedDrawing.m:
@implementation NSImage (EtchedImageDrawing)

- (void)drawEtchedInRect:(NSRect)rect
{
    NSSize size = rect.size;
    CGFloat dropShadowOffsetY = size.width <= 64.0 ? -1.0 : -2.0;
    CGFloat innerShadowBlurRadius = size.width <= 32.0 ? 1.0 : 4.0;

    CGContextRef c = [[NSGraphicsContext currentContext] graphicsPort];

    // Save the current graphics state
    CGContextSaveGState(c);

    // Create mask image:
    NSRect maskRect = rect;
    CGImageRef maskImage = [self CGImageForProposedRect:&maskRect context:[NSGraphicsContext currentContext] hints:nil];

    // Draw image and white drop shadow:
    CGContextSetShadowWithColor(c, CGSizeMake(0, dropShadowOffsetY), 0, CGColorGetConstantColor(kCGColorWhite));
    [self drawInRect:maskRect fromRect:NSMakeRect(0, 0, self.size.width, self.size.height) operation:NSCompositeSourceOver fraction:1.0];

    // Clip drawing to mask:
    CGContextClipToMask(c, NSRectToCGRect(maskRect), maskImage);

    // Draw gradient:
    NSGradient *gradient = [[[NSGradient alloc] initWithStartingColor:[NSColor colorWithDeviceWhite:0.5 alpha:1.0]
                                                           endingColor:[NSColor colorWithDeviceWhite:0.25 alpha:1.0]] autorelease];
    [gradient drawInRect:maskRect angle:90.0];
    CGContextSetShadowWithColor(c, CGSizeMake(0, -1), innerShadowBlurRadius, CGColorGetConstantColor(kCGColorBlack));

    // Draw inner shadow with inverted mask:
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef maskContext = CGBitmapContextCreate(NULL, CGImageGetWidth(maskImage), CGImageGetHeight(maskImage), 8, CGImageGetWidth(maskImage) * 4, colorSpace, kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    CGContextSetBlendMode(maskContext, kCGBlendModeXOR);
    CGContextDrawImage(maskContext, maskRect, maskImage);
    CGContextSetRGBFillColor(maskContext, 1.0, 1.0, 1.0, 1.0);
    CGContextFillRect(maskContext, maskRect);
    CGImageRef invertedMaskImage = CGBitmapContextCreateImage(maskContext);
    CGContextDrawImage(c, maskRect, invertedMaskImage);
    CGImageRelease(invertedMaskImage);
    CGContextRelease(maskContext);

    // Restore the graphics state
    CGContextRestoreGState(c);
}

@end
Example usage in a view:
- (void)drawRect:(NSRect)dirtyRect
{
    [[NSColor colorWithDeviceWhite:0.8 alpha:1.0] set];
    NSRectFill(self.bounds);

    NSImage *image = [NSImage imageNamed:@"MyIcon.pdf"];
    [image drawEtchedInRect:self.bounds];
}
This would give you the following result (shown in different sizes):
You may need to experiment a bit with the gradient colors and offset/blur radius of the two shadows to get closer to the original effect.
If you don't mind calling a private API, you can let the operating system (CoreUI) do the shading for you. You need a few declarations:
typedef CFTypeRef CUIRendererRef;
extern void CUIDraw(CUIRendererRef renderer, CGRect frame, CGContextRef context, CFDictionaryRef object, CFDictionaryRef *result);

@interface NSWindow (CoreUIRendererPrivate)
+ (CUIRendererRef)coreUIRenderer;
@end
And for the actual drawing:
CGRect drawRect = CGRectMake(x, y, width, height);
CGImageRef cgimage = your_image;

CFDictionaryRef dict = (CFDictionaryRef)[NSDictionary dictionaryWithObjectsAndKeys:
                        @"backgroundTypeRaised", @"backgroundTypeKey",
                        [NSNumber numberWithBool:YES], @"imageIsGrayscaleKey",
                        cgimage, @"imageReferenceKey",
                        @"normal", @"state",
                        @"image", @"widget",
                        [NSNumber numberWithBool:YES], @"is.flipped",
                        nil];

CUIDraw([NSWindow coreUIRenderer], drawRect, cg, dict, nil);
CGImageRelease(cgimage);
This will take the alpha channel of cgimage and apply the embossing effect as seen on toolbar buttons. You may or may not need the "is.flipped" line. Remove it if your result is upside-down.
There are a bunch of variations:
kCUIPresentationStateKey = kCUIPresentationStateInactive: The window is not active, the image will be lighter.
state = rollover: Only makes sense with the previous option. This means you are hovering over the image, the window is inactive, but the button is sensitive (click-through is enabled). It will become darker.
state = pressed: Occurs when the button is pressed. The icon gets slightly darker.
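For illustration, the inactive-window rollover variation might be expressed by swapping those entries into the dictionary above. Since CoreUI is private API, the exact key and value strings below are assumptions taken from the list above; verify them with CUITrace on your system:

// Hypothetical variation of the earlier dictionary (private API, unverified keys):
CFDictionaryRef dictInactiveRollover = (CFDictionaryRef)[NSDictionary dictionaryWithObjectsAndKeys:
    @"backgroundTypeRaised", @"backgroundTypeKey",
    [NSNumber numberWithBool:YES], @"imageIsGrayscaleKey",
    cgimage, @"imageReferenceKey",
    @"rollover", @"state",                                           // hovering while the window is inactive
    @"kCUIPresentationStateInactive", @"kCUIPresentationStateKey",   // lighter, inactive-window rendering
    @"image", @"widget",
    [NSNumber numberWithBool:YES], @"is.flipped",
    nil];
CUIDraw([NSWindow coreUIRenderer], drawRect, cg, dictInactiveRollover, nil);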
Bonus tip: To find out stuff like this, you can use the SIMBL plugin CUITrace. It prints out all the CoreUI invocations of a target app. This is a treasure trove if you have to draw your own native-looking UI.
Here's a much simpler solution: just create a cell and let it draw. No mucking around with private APIs or Core Graphics.
Code could look similar to the following:
NSButtonCell *buttonCell = [[NSButtonCell alloc] initImageCell:image];
buttonCell.bordered = YES;
buttonCell.bezelStyle = NSTexturedRoundedBezelStyle;
// additional configuration
[buttonCell drawInteriorWithFrame:someRect inView:self];
You can use different cells and configurations depending on the look you want (e.g. NSImageCell with NSBackgroundStyleDark if you want the inverted look of a selected table view row; see the sketch below).
And as a bonus, it will automatically look correct on all versions of OS X.
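A minimal sketch of that NSImageCell variant (someRect and the host view are placeholders, as above):

NSImageCell *imageCell = [[NSImageCell alloc] initImageCell:image];
imageCell.backgroundStyle = NSBackgroundStyleDark; // inverted rendering, as in a selected table view row
[imageCell drawInteriorWithFrame:someRect inView:self];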
To get it to draw correctly within any rect, the CGContextDrawImage and CGContextFillRect calls for the inner mask must use an origin of (0, 0). Then, when you draw the image for the inner shadow, you can reuse the mask rect. It ends up looking like this:
CGRect cgRect = CGRectMake( 0, 0, maskRect.size.width, maskRect.size.height );
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef maskContext = CGBitmapContextCreate( NULL, CGImageGetWidth( maskImage ), CGImageGetHeight( maskImage ), 8, CGImageGetWidth( maskImage ) * 4, colorSpace, kCGImageAlphaPremultipliedLast );
CGColorSpaceRelease( colorSpace );
CGContextSetBlendMode( maskContext , kCGBlendModeXOR );
CGContextDrawImage( maskContext, cgRect, maskImage );
CGContextSetRGBFillColor( maskContext, 1.0, 1.0, 1.0, 1.0 );
CGContextFillRect( maskContext, cgRect );
CGImageRef invertedMaskImage = CGBitmapContextCreateImage( maskContext );
CGContextDrawImage( context, maskRect, invertedMaskImage );
CGImageRelease( invertedMaskImage );
CGContextRelease( maskContext );
CGContextRestoreGState( context );
You also have to leave a 1px border around the outside of the image or the shadows won't work correctly.