Is it possible to draw graphics with CGContext and save them into an NSMutableArray? - objective-c

At first I draw graphics:
-(void)drawRect:(CGRect)rect {
    // Circle
    circle = UIGraphicsGetCurrentContext();
    CGContextSetFillColorWithColor(circle, [UIColor darkGrayColor].CGColor);
    CGContextFillRect(circle, CGRectMake(10, 10, 50, 50));
    // Rectangle
    rectangle = UIGraphicsGetCurrentContext();
    CGContextSetFillColorWithColor(rectangle, [UIColor darkGrayColor].CGColor);
    CGContextFillRect(rectangle, CGRectMake(10, 10, 50, 50));
    // Square
    square = UIGraphicsGetCurrentContext();
    CGContextSetFillColorWithColor(square, [UIColor darkGrayColor].CGColor);
    CGContextFillRect(square, CGRectMake(100, 100, 25, 25));
    // Triangle
    triangle = UIGraphicsGetCurrentContext();
    CGContextSetFillColorWithColor(triangle, [UIColor darkGrayColor].CGColor);
    CGPoint points[6] = { CGPointMake(100, 200), CGPointMake(150, 250),
                          CGPointMake(150, 250), CGPointMake(50, 250),
                          CGPointMake(50, 250), CGPointMake(100, 200) };
    CGContextStrokeLineSegments(triangle, points, 6);
}
And now I want to put the graphics into an Array:
-(void)viewDidLoad {
    grafics = [[NSMutableArray alloc] initWithObjects:circle, rectangle, square, triangle, nil];
    self.navigationItem.title = @"Shape";
    [super viewDidLoad];
}
The problem is that I have to cast the objects to make them fit into this array. Another problem is that the UITableViewController does not load the array into the rows. There is nothing wrong with the table view; the failure is in my handling of CGContext. What am I doing wrong?

drawRect: is a method on UIView used to draw the view itself, not to pre-create graphic objects.
Since it seems that you want to create shapes, store them, and draw them later, it appears reasonable to create the shapes as UIImages and draw them using UIImageView. A UIImage can be stored directly in an NSArray.
To create the images, do the following (on the main queue; not in drawRect:):
1) create a bitmap context
UIGraphicsBeginImageContextWithOptions(size, opaque, scale);
2) get the context
CGContextRef context = UIGraphicsGetCurrentContext();
3) draw whatever you need
4) export the context into an image
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
5) destroy the context
UIGraphicsEndImageContext();
6) store the reference to the image
[yourArray addObject:image];
Repeat for each shape you want to create.
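Putting the steps together, a minimal sketch for one shape might look like this (the 60×60 point canvas and the shapes array name are illustrative, not from the question):
NSMutableArray *shapes = [NSMutableArray array];

// 1) + 2) create a bitmap context and get a reference to it
UIGraphicsBeginImageContextWithOptions(CGSizeMake(60, 60), NO, 0.0);
CGContextRef context = UIGraphicsGetCurrentContext();

// 3) draw the shape
CGContextSetFillColorWithColor(context, [UIColor darkGrayColor].CGColor);
CGContextFillEllipseInRect(context, CGRectMake(5, 5, 50, 50));

// 4) export the drawing into a UIImage
UIImage *circleImage = UIGraphicsGetImageFromCurrentImageContext();

// 5) destroy the context
UIGraphicsEndImageContext();

// 6) store the image; UIImage is an object, so no cast is needed
[shapes addObject:circleImage];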
For details, see the documentation for the above-mentioned functions. To get a better understanding of the difference between drawing in drawRect: and in an arbitrary place in your program, and of working with contexts in general, I would recommend you read the Quartz 2D Programming Guide, especially the section on Graphics Contexts.

First, UIGraphicsGetCurrentContext returns a CGContextRef, which is not an object and cannot be stored directly in a collection (edit: this is not really true, see the comments). Technically speaking, you could wrap the context in an NSValue or turn it into a UIImage:
CGImageRef imageRef = CGBitmapContextCreateImage(context);
UIImage *img = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef); // the UIImage retains its own reference
The resulting UIImage is an object and can be stored in a collection just fine. But the code sample does not make sense in the big picture. The -drawRect: method is meant to draw your component, not create some graphics to be used later. And -viewDidLoad is a UIViewController method, not a UIView one. Could you edit the question and describe what you are trying to do?
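For reference, the NSValue wrapping mentioned above would look roughly like this; note that it stores only the raw pointer and does not retain the context, so the UIImage route is usually the better one:
NSValue *wrapped = [NSValue valueWithPointer:context];
CGContextRef unwrapped = (CGContextRef)[wrapped pointerValue];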

Related

Xcode making a pdf, trying to round corners

I am making a PDF in an iPad app. I can make the PDF, but I want to add a picture with a rounded-corner border. For example, to achieve the effect I want on the border of a simple view item, I use the following code:
self.SaveButtonProp.layer.cornerRadius = 8.0f;
self.SaveButtonProp.layer.masksToBounds = YES;
self.SaveButtonProp.layer.borderColor = [[UIColor blackColor] CGColor];
self.SaveButtonProp.layer.borderWidth = 1.0f;
With the PDF, I am using the following code to add the picture with the border:
CGContextRef currentContext = UIGraphicsGetCurrentContext();
UIImage *demoImage = [UIImage imageWithData:Image];
UIColor *borderColor = [UIColor blackColor];
CGRect rectFrame = CGRectMake(20, 125, 200, 200);
[demoImage drawInRect:rectFrame];
CGContextSetStrokeColorWithColor(currentContext, borderColor.CGColor);
CGContextSetLineWidth(currentContext, 2);
CGContextStrokeRect(currentContext, rectFrame);
How do I round the corners?
Thanks
While drawing you can set clipping masks. For example, it's relatively easy to create a Bezier path in the shape of a rounded rectangle and apply it as a clipping mask to your graphics context. Everything drawn afterwards will be clipped.
If you want to remove the clipping mask later (for example, because you have an image with rounded corners but follow it with other elements), you'll have to save the graphics state first, then apply your clipping mask, and restore the graphics state when you're done with the rounded corners.
You can see actual code that comes pretty close to what I think you need here:
UIImage with rounded corners
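A minimal sketch of that approach, reusing rectFrame and demoImage from the question (the 8-point corner radius is just an assumption to match the button style above):
CGContextRef currentContext = UIGraphicsGetCurrentContext();

// Save the graphics state so the clip can be undone afterwards
CGContextSaveGState(currentContext);

// Build a rounded-rect path and install it as the clipping mask
UIBezierPath *roundedPath = [UIBezierPath bezierPathWithRoundedRect:rectFrame
                                                       cornerRadius:8.0f];
[roundedPath addClip];

// Everything drawn from here on is clipped to the rounded rect
[demoImage drawInRect:rectFrame];
[roundedPath setLineWidth:2.0f];
[[UIColor blackColor] setStroke];
[roundedPath stroke];

// Restore the state so later drawing is no longer clipped
CGContextRestoreGState(currentContext);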
You can use a method to render any UIView/UIImageView into PDF NSData:
UIImageView *imageView = [[UIImageView alloc] initWithImage:image];
NSData *data = [self makePDFfromView:imageView];
Method:
- (NSData *)makePDFfromView:(UIView *)view
{
    NSMutableData *pdfData = [NSMutableData data];
    UIGraphicsBeginPDFContextToData(pdfData, view.bounds, nil);
    UIGraphicsBeginPDFPage();
    CGContextRef pdfContext = UIGraphicsGetCurrentContext();
    [view.layer renderInContext:pdfContext];
    UIGraphicsEndPDFContext();
    return pdfData;
}
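If it helps, you could then write that data out to disk, for example to a hypothetical file in the Documents directory:
NSString *documentsPath = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0];
NSString *pdfPath = [documentsPath stringByAppendingPathComponent:@"output.pdf"];
[data writeToFile:pdfPath atomically:YES];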
Maybe you can change or use this code to help you with your problem.

setNeedsDisplay for UIImage doesn't work

I am trying to draw a line on a UIImage with the help of a data buffer when a button gets touched, but the drawRect: method doesn't get called (so the line I want to draw doesn't appear).
I really don't know where the problem could be, so I posted quite a bit more code. Sorry about that.
By the way, I am working on a view-based application.
Here is my code:
-(void)viewDidLoad {
    UIImage *image = [UIImage imageNamed:@"playView2.jpg"];
    [playView setImage:image];
    [image retain];
    offScreenBuffer = [self setupBuffer];
}

- (IBAction)buttonTouched:(id)sender {
    [self drawToBuffer];
    [playView setNeedsDisplay];
}

- (void)drawRect:(CGRect)rect {
    CGImageRef cgImage = CGBitmapContextCreateImage(offScreenBuffer);
    UIImage *uiImage = [[UIImage alloc] initWithCGImage:cgImage];
    CGImageRelease(cgImage);
    [uiImage drawInRect:playView.bounds];
    [uiImage release];
}

-(void)drawToBuffer {
    //                  Red  Green Blue Alpha
    CGFloat color[4] = {1.0, 1.0,  0.0, 1.0};
    CGContextSetRGBStrokeColor(offScreenBuffer, color[0], color[1], color[2], color[3]);
    CGContextBeginPath(offScreenBuffer);
    CGContextSetLineWidth(offScreenBuffer, 10.0);
    CGContextSetLineCap(offScreenBuffer, kCGLineCapRound);
    CGContextMoveToPoint(offScreenBuffer, 30, 40);
    CGContextAddLineToPoint(offScreenBuffer, 150, 150);
    CGContextDrawPath(offScreenBuffer, kCGPathStroke);
}

-(CGContextRef)setupBuffer {
    CGSize size = playView.bounds.size;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(NULL, size.width, size.height, 8, size.width * 4, colorSpace, kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    return context;
}
Where is this code? viewDidLoad is a view controller method, drawRect is a view method. Whatever playView is (presumably a UIImageView?) needs to be a custom UIView subclass with your drawRect inside it, and you'll have to pass the various items to that from your view controller.
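A rough sketch of that structure, using hypothetical names (a PlayView subclass that owns the buffer drawing; manual retain/release as in the question):
// PlayView.h -- a custom UIView that knows how to draw the off-screen buffer
@interface PlayView : UIView
@property (nonatomic, assign) CGContextRef offScreenBuffer;
@end

// PlayView.m
@implementation PlayView
@synthesize offScreenBuffer;

- (void)drawRect:(CGRect)rect {
    if (offScreenBuffer == NULL)
        return;
    CGImageRef cgImage = CGBitmapContextCreateImage(offScreenBuffer);
    UIImage *uiImage = [[UIImage alloc] initWithCGImage:cgImage];
    CGImageRelease(cgImage);
    [uiImage drawInRect:self.bounds];
    [uiImage release];
}
@end
The view controller would then create the buffer, assign it to the view, and call setNeedsDisplay on the view from buttonTouched:.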
I know this is an old question, but still, someone might come across it:
the same (or a very similar) question is correctly answered here:
drawRect not being called in my subclass of UIImageView
The UIImageView class (and its subclasses) won't call drawRect: when you call setNeedsDisplay on it.

Image Cropping API for iOS

Is there any image-cropping API for Objective-C that crops images dynamically in an Xcode project? Please share some tricks or techniques for cropping camera images on the iPhone.
You can use the simple code below to crop an image. You have to pass in the image and a CGRect defining the cropping area. Here, I crop the image so that I get the center part of the original image, and the returned image is square.
// Returns the largest possible centered cropped image.
- (UIImage *)centerCropImage:(UIImage *)image
{
    // Use the smaller side length as the crop square length
    CGFloat squareLength = MIN(image.size.width, image.size.height);
    // Center the crop area
    CGRect clippedRect = CGRectMake((image.size.width - squareLength) / 2, (image.size.height - squareLength) / 2, squareLength, squareLength);
    // Crop logic
    CGImageRef imageRef = CGImageCreateWithImageInRect([image CGImage], clippedRect);
    UIImage *croppedImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    return croppedImage;
}
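A hypothetical call site (pickedImage standing in for an image from the camera picker):
UIImage *squareImage = [self centerCropImage:pickedImage];
One caveat: CGImageCreateWithImageInRect works in pixel coordinates, while image.size is in points, so for Retina images with scale > 1 you may need to multiply clippedRect by image.scale first.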
All these solutions seem quite complicated, and many of them actually degrade the quality of the image.
You can do it much more simply using UIImageView's out-of-the-box methods.
Objective-C:
self.imageView.contentMode = UIViewContentModeScaleAspectFill;
[self.imageView setClipsToBounds:YES];
[self.imageView setImage:img];
EDIT - Swift Version:
let imageView = UIImageView(image: image)
imageView.contentMode = .scaleAspectFill
imageView.clipsToBounds = true
This will crop your image based on the dimensions you've set for your UIImageView (I've called mine imageView here).
It's that simple and works much better than the other solutions.
You can use the Core Graphics framework to crop images dynamically.
Here is an example snippet from a dynamic image-crop implementation (NXMaskDrawContext is the author's own type, apparently a mutable collection of NSValue-wrapped CGPoints). I hope this is helpful for you.
- (void)drawMaskLineSegmentTo:(CGPoint)ptTo withMaskWidth:(CGFloat)maskWidth inContext:(NXMaskDrawContext)context {
    if (context == nil)
        return;
    if (context.count <= 0) {
        [context addObject:[NSValue valueWithCGPoint:ptTo]];
        return;
    }
    CGPoint ptFrom = [context.lastObject CGPointValue];

    UIGraphicsBeginImageContext(self.maskImage.size);
    [self.maskImage drawInRect:CGRectMake(0, 0, self.maskImage.size.width, self.maskImage.size.height)];
    CGContextRef graphicsContext = UIGraphicsGetCurrentContext();
    CGContextSetBlendMode(graphicsContext, kCGBlendModeCopy);
    CGContextSetRGBStrokeColor(graphicsContext, 1, 1, 1, 1);
    CGContextSetLineWidth(graphicsContext, maskWidth);
    CGContextSetLineCap(graphicsContext, kCGLineCapRound);
    CGContextMoveToPoint(graphicsContext, ptFrom.x, ptFrom.y);
    CGContextAddLineToPoint(graphicsContext, ptTo.x, ptTo.y);
    CGContextStrokePath(graphicsContext);
    self.maskImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    UIGraphicsBeginImageContext(self.displayableMaskImage.size);
    [self.displayableMaskImage drawInRect:CGRectMake(0, 0, self.displayableMaskImage.size.width, self.displayableMaskImage.size.height)];
    graphicsContext = UIGraphicsGetCurrentContext();
    CGContextSetBlendMode(graphicsContext, kCGBlendModeCopy);
    CGContextSetStrokeColorWithColor(graphicsContext, self.displayableMaskColor.CGColor);
    CGContextSetLineWidth(graphicsContext, maskWidth);
    CGContextSetLineCap(graphicsContext, kCGLineCapRound);
    CGContextMoveToPoint(graphicsContext, ptFrom.x, ptFrom.y);
    CGContextAddLineToPoint(graphicsContext, ptTo.x, ptTo.y);
    CGContextStrokePath(graphicsContext);
    self.displayableMaskImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    [context addObject:[NSValue valueWithCGPoint:ptTo]];
}
Xcode 5, iOS 7, 4-inch screen example: here is an open-source SimpleImageCropEditor (project zip and source code example). You can load the image crop editor as a modal view controller and reuse it. Look at the code and please leave constructive comments on whether this example code answers the question "Image Cropping API for iOS".
The example Objective-C source demonstrates the use of UIImagePickerController, @protocol, UIActionSheet, UIScrollView, UINavigationController, MFMailComposeViewController, and UIGestureRecognizer.

Turning an NSImage* into a CGImageRef?

Is there an easy way to do this that works in 10.5?
In 10.6 I can use [nsImage CGImageForProposedRect:NULL context:NULL hints:NULL].
If I'm not using 1-bit black-and-white images (like Group 4 TIFF), I can use bitmaps, but CGBitmaps seem not to like that setup... Is there a general way of doing this?
I need to do this because I have an IKImageView that seems to only want to accept CGImages, but all I've got are NSImages. Currently, I'm using a private setImage:(NSImage *) method that I'd REALLY REALLY rather not be using...
Found the following solution on this page:
NSImage *someImage;
// create the image somehow: load it from a file, draw into it...
CGImageSourceRef source = CGImageSourceCreateWithData((CFDataRef)[someImage TIFFRepresentation], NULL);
CGImageRef maskRef = CGImageSourceCreateImageAtIndex(source, 0, NULL);
CFRelease(source); // release the source; the caller owns maskRef
All the methods seem to be 10.4+, so you should be fine.
[This is the long way around. You should only do this if you're still supporting an old version of OS X. If you can require 10.6 or later, just use the CGImage method instead.]
Create a CGBitmapContext.
Create an NSGraphicsContext for the bitmap context.
Set the graphics context as the current context.
Draw the image.
Create the CGImage from the contents of the bitmap context.
Release the bitmap context.
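A minimal sketch of those steps, assuming a non-flipped image and using the graphicsContextWithGraphicsPort:flipped: API of that era (since deprecated in favor of graphicsContextWithCGContext:flipped:):
CGImageRef CreateCGImageFromNSImage(NSImage *image) {
    size_t width = (size_t)[image size].width;
    size_t height = (size_t)[image size].height;

    // 1) create a CGBitmapContext (bytesPerRow of 0 lets CG pick a value)
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef bitmapContext = CGBitmapContextCreate(NULL, width, height, 8, 0,
                                                       colorSpace, kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);

    // 2) create an NSGraphicsContext for it and 3) make it current
    [NSGraphicsContext saveGraphicsState];
    NSGraphicsContext *nsContext =
        [NSGraphicsContext graphicsContextWithGraphicsPort:bitmapContext flipped:NO];
    [NSGraphicsContext setCurrentContext:nsContext];

    // 4) draw the image into the current (bitmap-backed) context
    [image drawInRect:NSMakeRect(0, 0, width, height)
             fromRect:NSZeroRect
            operation:NSCompositeCopy
             fraction:1.0];
    [NSGraphicsContext restoreGraphicsState];

    // 5) create the CGImage from the bitmap's contents
    CGImageRef cgImage = CGBitmapContextCreateImage(bitmapContext);

    // 6) release the bitmap context; the caller owns the returned CGImage
    CGContextRelease(bitmapContext);
    return cgImage;
}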
As of Snow Leopard, you can just ask the image to create a CGImage for you, by sending the NSImage a CGImageForProposedRect:context:hints: message.
Here's a more detailed answer for using -CGImageForProposedRect:context:hints::
NSImage *image = [NSImage imageNamed:@"image.jpg"];
NSRect imageRect = NSMakeRect(0, 0, image.size.width, image.size.height);
CGImageRef cgImage = [image CGImageForProposedRect:&imageRect context:NULL hints:nil];
Just did this using CGImageForProposedRect:context:hints:...
NSImage *image; // an image
NSGraphicsContext *context = [NSGraphicsContext currentContext];
CGRect imageCGRect = CGRectMake(0, 0, image.size.width, image.size.height);
NSRect imageRect = NSRectFromCGRect(imageCGRect);
CGImageRef imageRef = [image CGImageForProposedRect:&imageRect context:context hints:nil];
As of 2021, a CALayer's contents can be fed directly with an NSImage, but you may still get an [Utility] unsupported surface format: LA08 warning.
After a couple of days of research and testing, I found out that this warning is triggered if you create an NSView with a backing layer (a CALayer) and just use an NSImage to feed its contents. That alone is not much of a problem, but if you try to render it via [self.layer renderInContext:ctx], each render will trigger the warning again.
To keep its use simple, I created an NSImage category to fix this:
@interface NSImage (LA08FIXExtension)
- (id)CGImageRefID;
@end

@implementation NSImage (LA08FIXExtension)
- (id)CGImageRefID {
    NSSize size = [self size];
    NSRect rect = NSMakeRect(0, 0, size.width, size.height);
    CGImageRef ref = [self CGImageForProposedRect:&rect context:[NSGraphicsContext currentContext] hints:NULL];
    return (__bridge id)ref;
}
@end
Note how it does not return a CGImageRef directly but a bridged cast to id! This does the trick.
Now you can use it like this:
CALayer *somelayer = [CALayer layer];
somelayer.frame = ...
somelayer.contents = [NSImage imageNamed:@"SomeImage"].CGImageRefID;
PS: posted here because if you have this problem too, Google will lead you to this thread anyway, and all the answers above inspired me to make this fix.

Image Context produces artifacts when compositing UIImages

I'm trying to overlay a custom semi-transparent image over a base image. The overlay image is stretchable and created like this:
[[UIImage imageNamed:@"overlay.png"] stretchableImageWithLeftCapWidth:5.0 topCapHeight:5.0]
Then I pass that off to a method that overlays it onto the background image for a button:
- (void)overlayImage:(UIImage *)overlay forState:(UIControlState)state {
    UIImage *baseImage = [self backgroundImageForState:state];
    CGRect frame = CGRectZero;
    frame.size = baseImage.size;
    // create a new image context
    UIGraphicsBeginImageContext(baseImage.size);
    // get context
    CGContextRef context = UIGraphicsGetCurrentContext();
    // clear context
    CGContextClearRect(context, frame);
    // draw images
    [baseImage drawInRect:frame];
    [overlay drawInRect:frame]; // blendMode:kCGBlendModeNormal alpha:1.0];
    // get UIImage
    UIImage *overlaidImage = UIGraphicsGetImageFromCurrentImageContext();
    // clean up context
    UIGraphicsEndImageContext();
    [self setBackgroundImage:overlaidImage forState:state];
}
The resulting overlaidImage looks mostly correct: it is the correct size, the alpha is blended correctly, etc. However, it has vertical artifacts/noise.
(Example image: http://acaciatreesoftware.com/img/UIImage-artifacts.png)
I tried clearing the context first and then turning off PNG compression, which reduces the artifacts somewhat (eliminating them completely on non-stretched images, I think).
Does anyone know a method for drawing stretchable UIImages without this sort of artifacting?
So the answer is: don't do this. Instead, you can paint your overlay procedurally, like so:
- (void)overlayWithColor:(UIColor *)overlayColor forState:(UIControlState)state {
    UIImage *baseImage = [self backgroundImageForState:state];
    CGRect frame = CGRectZero;
    frame.size = baseImage.size;
    // create a new image context
    UIGraphicsBeginImageContext(baseImage.size);
    // get context
    CGContextRef context = UIGraphicsGetCurrentContext();
    // draw background image
    [baseImage drawInRect:frame];
    // overlay color
    CGContextSetFillColorWithColor(context, [overlayColor CGColor]);
    CGContextSetBlendMode(context, kCGBlendModeSourceAtop);
    CGContextFillRect(context, frame);
    // get UIImage
    UIImage *overlaidImage = UIGraphicsGetImageFromCurrentImageContext();
    // clean up context
    UIGraphicsEndImageContext();
    [self setBackgroundImage:overlaidImage forState:state];
}
Are you being too miserly with your original image, forcing it to stretch rather than shrink? I've found the best results come from images that match the target aspect ratio and are reduced in size. That might not solve your problem, though.