setNeedsDisplay for UIImage doesn't work - objective-c

I am trying to draw a line on a UIImage with the help of a data buffer when a button gets touched, but the drawRect: method doesn't get called (so the line I want to draw doesn't appear).
I really don't know where the problem could be, so I posted quite a bit more code. Sorry about that.
By the way, I am working on a view-based application.
Here is my code:
- (void)viewDidLoad {
    UIImage *image = [UIImage imageNamed:@"playView2.jpg"];
    [playView setImage:image];
    [image retain];
    offScreenBuffer = [self setupBuffer];
}

- (IBAction)buttonTouched:(id)sender {
    [self drawToBuffer];
    [playView setNeedsDisplay];
}

- (void)drawRect:(CGRect)rect {
    CGImageRef cgImage = CGBitmapContextCreateImage(offScreenBuffer);
    UIImage *uiImage = [[UIImage alloc] initWithCGImage:cgImage];
    CGImageRelease(cgImage);
    [uiImage drawInRect:playView.bounds];
    [uiImage release];
}

- (void)drawToBuffer {
    //                  Red  Gr   Blu  Alpha
    CGFloat color[4] = {1.0, 1.0, 0.0, 1.0};
    CGContextSetRGBStrokeColor(offScreenBuffer, color[0], color[1], color[2], color[3]);
    CGContextBeginPath(offScreenBuffer);
    CGContextSetLineWidth(offScreenBuffer, 10.0);
    CGContextSetLineCap(offScreenBuffer, kCGLineCapRound);
    CGContextMoveToPoint(offScreenBuffer, 30, 40);
    CGContextAddLineToPoint(offScreenBuffer, 150, 150);
    CGContextDrawPath(offScreenBuffer, kCGPathStroke);
}

- (CGContextRef)setupBuffer {
    CGSize size = playView.bounds.size;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(NULL, size.width, size.height, 8, size.width*4, colorSpace, kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    return context;
}

Where is this code? viewDidLoad is a view controller method; drawRect: is a view method. Whatever playView is (presumably a UIImageView?) needs to be a custom UIView subclass with your drawRect: inside it, and you'll have to pass the various items to it from your view controller.
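For illustration, here is a minimal sketch of that idea, assuming a hypothetical PlayView class is wired up in the nib in place of the plain image view; the view controller creates the buffer, hands it to the view, and the button action calls setNeedsDisplay on it after drawing:
// PlayView.h -- hypothetical custom view that owns the on-screen drawing
@interface PlayView : UIView {
    CGContextRef offScreenBuffer;
}
- (void)setOffScreenBuffer:(CGContextRef)buffer;
@end

// PlayView.m
@implementation PlayView

- (void)setOffScreenBuffer:(CGContextRef)buffer {
    offScreenBuffer = buffer;
}

- (void)drawRect:(CGRect)rect {
    if (offScreenBuffer == NULL) return;
    // Copy the off-screen bitmap into an image and paint it into the view
    CGImageRef cgImage = CGBitmapContextCreateImage(offScreenBuffer);
    UIImage *uiImage = [[UIImage alloc] initWithCGImage:cgImage];
    CGImageRelease(cgImage);
    [uiImage drawInRect:self.bounds];
    [uiImage release];
}

@end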

I know this is an old question, but someone might still come across it: the same (or a very similar) question is correctly answered here:
drawRect not being called in my subclass of UIImageView
The UIImageView class (and its subclasses) doesn't call drawRect: when you call setNeedsDisplay on it.
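If you'd rather keep playView as a UIImageView, a sketch of the alternative (using the offScreenBuffer and playView names from the question) is to skip drawRect: entirely and hand the rendered bitmap straight to the image view after each drawing pass:
- (IBAction)buttonTouched:(id)sender {
    [self drawToBuffer];
    // Convert the off-screen bitmap into a UIImage; UIImageView redraws itself,
    // so no drawRect: override (and no setNeedsDisplay) is needed.
    CGImageRef cgImage = CGBitmapContextCreateImage(offScreenBuffer);
    playView.image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
}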

Related

Trying to draw in UIView

Hi, I am trying to draw some .png images stored in the sandbox and then output text in a UIView. My code is:
- (void)setItemDetails:(ItemShow *)itmShow
{
    if (theItem != itmShow)
    {
        [theItem release];
        theItem = itmShow;
        [theItem retain];
    }
    UIImage *rImage = [UIImage imageNamed:@"years"];
    [[UIColor blackColor] set];
    [rImage drawInRect:CGRectMake(55.0, 22.0, 17.0, 17.0)];
    [[UIColor brownColor] set];
    [theItem.itemYear drawAtPoint:CGPointMake(7.0, 19.0)
                         forWidth:100
                         withFont:[UIFont systemFontOfSize:17.0]
                      minFontSize:17.0
                   actualFontSize:NULL
                    lineBreakMode:UILineBreakModeTailTruncation
               baselineAdjustment:UIBaselineAdjustmentAlignBaselines];
}
After that I call this method in viewDidLoad. Nothing happens. I can't see any image or text on the UIView's canvas. What's wrong here?
That's right, that is exactly what's supposed to happen (nothing), because viewDidLoad is not the right place to do drawing in iOS. You need to implement a different method:
- (void)drawRect:(CGRect)rect {
    CGContextRef myContext = UIGraphicsGetCurrentContext();
    // Do your drawing in myContext
}
This method of your UIView implementation gets called to do the drawing. Trying to draw on the screen from just about anywhere else is not going to produce the desired results.
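Applied to the code above, a minimal sketch might look like the following, assuming theItem is an ivar of the view doing the drawing; setItemDetails: would then just retain the new item and call [self setNeedsDisplay] so the view redraws.
- (void)drawRect:(CGRect)rect {
    // Draw the image and the text directly into the view's current context
    UIImage *rImage = [UIImage imageNamed:@"years"];
    [rImage drawInRect:CGRectMake(55.0, 22.0, 17.0, 17.0)];
    [[UIColor brownColor] set];
    [theItem.itemYear drawAtPoint:CGPointMake(7.0, 19.0)
                         forWidth:100
                         withFont:[UIFont systemFontOfSize:17.0]
                      minFontSize:17.0
                   actualFontSize:NULL
                    lineBreakMode:UILineBreakModeTailTruncation
               baselineAdjustment:UIBaselineAdjustmentAlignBaselines];
}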

iPhone4 iOS take screenshot of the view without NavBar and TabBar?

I got this code to be able to take a screenshot of the view.
UIGraphicsBeginImageContext(scrollView.bounds.size);
[scrollView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSData * data = UIImagePNGRepresentation(image);
However, even if I set the context to 320x480, parts of the scroll view are still not shown. The view that the scroll view manages fits perfectly within the 320x480 frame, but parts of it are covered by the status bar, navBar and TabBar.
I would like to take a full-screen (320x480) screenshot of the view, with the parts covered by the status bar, TabBar and NavBar visible. Any pointers on how to do this?
An extra question, which may be related: the resulting image is at 1x scale and looks very blurry on the retina display, which takes a larger image and scales it down. This means I'd need to render a 640x960 screenshot to reproduce the original crisp quality. How would I go about doing that?
Thank you!
I found the following on this site: http://www.icodeblog.com/2009/07/27/1188/
UIGraphicsBeginImageContext(YourView.frame.size);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageWriteToSavedPhotosAlbum(viewImage, nil, nil, nil);
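For the blurry/retina part of the question, the usual fix (assuming iOS 4 or later) is to open the context with the screen's scale instead of the default 1x; passing 0.0 as the scale uses the device's main screen scale automatically:
UIGraphicsBeginImageContextWithOptions(scrollView.bounds.size, NO, 0.0);
[scrollView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// On a retina device the resulting image is 640x960 pixels with scale 2.0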
You may also want to check out this (Apple's example of how to make a screenshot): http://developer.apple.com/library/ios/#qa/qa1703/_index.html
First make a screenshot of the whole screen:
// Create a graphics context with the target size
// On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
// On iOS prior to 4, fall back to use UIGraphicsBeginImageContext
CGSize imageSize = [[UIScreen mainScreen] bounds].size;
if (NULL != UIGraphicsBeginImageContextWithOptions)
    UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0.0);
else
    UIGraphicsBeginImageContext(imageSize);

CGContextRef context = UIGraphicsGetCurrentContext();

// Iterate over every window from back to front
for (UIWindow *window in [[UIApplication sharedApplication] windows])
{
    if (![window respondsToSelector:@selector(screen)] || [window screen] == [UIScreen mainScreen])
    {
        // -renderInContext: renders in the coordinate space of the layer,
        // so we must first apply the layer's geometry to the graphics context
        CGContextSaveGState(context);
        // Center the context around the window's anchor point
        CGContextTranslateCTM(context, [window center].x, [window center].y);
        // Apply the window's transform about the anchor point
        CGContextConcatCTM(context, [window transform]);
        // Offset by the portion of the bounds left of and above the anchor point
        CGContextTranslateCTM(context,
                              -[window bounds].size.width * [[window layer] anchorPoint].x,
                              -[window bounds].size.height * [[window layer] anchorPoint].y);
        // Render the layer hierarchy to the current context
        [[window layer] renderInContext:context];
        // Restore the context
        CGContextRestoreGState(context);
    }
}

// Retrieve the screenshot image and close the context
UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Then crop it to the right size:
// You should remove the hard-coded numbers; this rect skips the status bar and navigation bar
CGRect smallBounds = CGRectMake(0, 64, 320, 372);
CGImageRef subImageRef = CGImageCreateWithImageInRect(screenshot.CGImage, smallBounds);
UIImage *cropped = [UIImage imageWithCGImage:subImageRef];
CGImageRelease(subImageRef);
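One caveat worth checking if the screenshot was rendered at retina scale: CGImageCreateWithImageInRect works in pixels, not points, so the crop rect has to be multiplied by the image's scale, roughly like this:
CGFloat scale = screenshot.scale;
CGRect pixelBounds = CGRectMake(smallBounds.origin.x * scale,
                                smallBounds.origin.y * scale,
                                smallBounds.size.width * scale,
                                smallBounds.size.height * scale);
CGImageRef subImageRef = CGImageCreateWithImageInRect(screenshot.CGImage, pixelBounds);
UIImage *cropped = [UIImage imageWithCGImage:subImageRef scale:scale orientation:UIImageOrientationUp];
CGImageRelease(subImageRef);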

Showing UIBezierPath on view

In one of my methods I have this code:
-(void)myMethod {
UIBezierPath *circle = [UIBezierPath
bezierPathWithOvalInRect:CGRectMake(75, 100, 200, 200)];
}
How do I get it to show on the view?
I tried addSubview, but it gave me an incompatible type error because it's expecting a UIView.
I'm sure this must be simple.
Thanks
Just thought I'd add that you don't necessarily have to draw this in a UIView's "drawRect:" method. You can draw it anywhere you'd like, provided you do it inside a UIGraphics image context. I do this all of the time when I don't want to create a subclass of UIView. Here's a working example:
UIBezierPath *circle = [UIBezierPath
bezierPathWithOvalInRect:CGRectMake(75, 100, 200, 200)];
//you have to account for the x and y values of your UIBezierPath rect
//add the x to the width (75 + 200)
//add the y to the height (100 + 200)
UIGraphicsBeginImageContext(CGSizeMake(275, 300));
//this gets the graphic context
CGContextRef context = UIGraphicsGetCurrentContext();
//you can stroke and/or fill
CGContextSetStrokeColorWithColor(context, [UIColor blueColor].CGColor);
CGContextSetFillColorWithColor(context, [UIColor lightGrayColor].CGColor);
[circle fill];
[circle stroke];
//now get the image from the context
UIImage *bezierImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageView *bezierImageView = [[UIImageView alloc]initWithImage:bezierImage];
Now just add the UIImageView as a subview.
Also, you can use this for other drawing too. Again, after a little bit of setup, it works just like the drawRect: method.
//this is an arbitrary size for example
CGSize aSize = CGSizeMake(50.f, 50.f);
//this can take any CGSize
//it works like the frame.size would in the drawRect: method
//in the way that it represents the context's size
UIGraphicsBeginImageContext(aSize);
//this gets the graphic context
CGContextRef context = UIGraphicsGetCurrentContext();
//you can do drawing just like you would in the drawRect: method
//I am drawing a square just for an example to show you that you can do any sort of drawing in here
CGContextMoveToPoint(context, 0.f, 0.f);
CGContextAddLineToPoint(context, aSize.width, 0.f);
CGContextAddLineToPoint(context, aSize.width, aSize.height);
CGContextAddLineToPoint(context, 0.f, aSize.height);
CGContextClosePath(context);
//you can stroke and/or fill
CGContextSetStrokeColorWithColor(context, [UIColor blueColor].CGColor);
CGContextSetFillColorWithColor(context, [UIColor lightGrayColor].CGColor);
CGContextDrawPath(context, kCGPathFillStroke);
//now get the image from the context
UIImage *squareImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageView *squareImageView = [[UIImageView alloc]initWithImage:squareImage];
Edit: One thing I should add is that for any modern day drawing of this kind, you should be swapping out
UIGraphicsBeginImageContext(size);
for
UIGraphicsBeginImageContextWithOptions(size, opaque, scale);
This will draw your graphics correctly for retina and non retina displays.
FYI, UIGraphicsBeginImageContext(size) is equivalent to UIGraphicsBeginImageContextWithOptions(size, FALSE, 1.f), which is fine for non-retina displays that may have some transparency.
However, if you don't need transparency, it is more efficient to pass in TRUE for the opaque argument.
The safest and recommended way of drawing is to pass in [[UIScreen mainScreen]scale] as the scale argument.
So for the example(s) above, you would use this instead:
UIGraphicsBeginImageContextWithOptions(aSize, FALSE, [[UIScreen mainScreen] scale]);
For more info, check out Apple's docs.
You can draw it using either the fill or stroke methods, for example in a custom view's drawRect: implementation:
- (void)drawRect:(CGRect)rect {
    // Drawing code
    UIBezierPath *circle = [UIBezierPath
        bezierPathWithOvalInRect:CGRectMake(75, 100, 200, 200)];
    [circle fill];
}
You can also add UIBezierPath to UIView without subclassing by using a CAShapeLayer.
For example, to add your path as a 3pt white line centered in a UIView:
UIBezierPath *mybezierpath = [UIBezierPath
bezierPathWithOvalInRect:CGRectMake(0, 0, 100, 100)];
CAShapeLayer *lines = [CAShapeLayer layer];
lines.path = mybezierpath.CGPath;
lines.bounds = CGPathGetBoundingBox(lines.path);
lines.strokeColor = [UIColor whiteColor].CGColor;
lines.fillColor = [UIColor clearColor].CGColor; /*if you just want lines*/
lines.lineWidth = 3;
lines.position = CGPointMake(self.myview.frame.size.width/2.0, self.myview.frame.size.height/2.0);
lines.anchorPoint = CGPointMake(.5, .5);
[self.myview.layer addSublayer:lines];
Drawing is the exclusive province of views. Make a custom view, give it your path, and implement the view's drawRect: method to fill and/or stroke the path.
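A minimal sketch of that approach (the class and property names here are made up for illustration, and ARC is assumed):
@interface PathView : UIView
@property (nonatomic, strong) UIBezierPath *path;
@end

@implementation PathView
@synthesize path = _path;

- (void)setPath:(UIBezierPath *)path {
    _path = path;
    [self setNeedsDisplay]; // redraw whenever the path changes
}

- (void)drawRect:(CGRect)rect {
    [[UIColor lightGrayColor] setFill];
    [[UIColor blueColor] setStroke];
    [self.path fill];
    [self.path stroke];
}
@end

// Usage from a view controller:
PathView *pathView = [[PathView alloc] initWithFrame:self.view.bounds];
pathView.backgroundColor = [UIColor clearColor];
pathView.path = [UIBezierPath bezierPathWithOvalInRect:CGRectMake(75, 100, 200, 200)];
[self.view addSubview:pathView];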
In Swift 2.0:
let path = UIBezierPath()
let p1 = CGPointMake(0,self.view.frame.height/2)
let p3 = CGPointMake(self.view.frame.width,self.view.frame.height/2)
path.moveToPoint(p1)
path.addQuadCurveToPoint(p3, controlPoint: CGPoint(x: self.view.frame.width/2, y: 0))
let line = CAShapeLayer()
line.path = path.CGPath;
line.strokeColor = UIColor.blackColor().CGColor
line.fillColor = UIColor.redColor().CGColor
view.layer.addSublayer(line)

Image Context produces artifacts when compositing UIImages

I'm trying to overlay a custom semi-transparent image over a base image. The overlay image is stretchable and created like this:
[[UIImage imageNamed:@"overlay.png"] stretchableImageWithLeftCapWidth:5.0 topCapHeight:5.0]
Then I pass that off to a method that overlays it onto the background image for a button:
- (void)overlayImage:(UIImage *)overlay forState:(UIControlState)state {
    UIImage *baseImage = [self backgroundImageForState:state];
    CGRect frame = CGRectZero;
    frame.size = baseImage.size;
    // create a new image context
    UIGraphicsBeginImageContext(baseImage.size);
    // get context
    CGContextRef context = UIGraphicsGetCurrentContext();
    // clear context
    CGContextClearRect(context, frame);
    // draw images
    [baseImage drawInRect:frame];
    [overlay drawInRect:frame]; // blendMode:kCGBlendModeNormal alpha:1.0];
    // get UIImage
    UIImage *overlaidImage = UIGraphicsGetImageFromCurrentImageContext();
    // clean up context
    UIGraphicsEndImageContext();
    [self setBackgroundImage:overlaidImage forState:state];
}
The resulting overlaidImage looks mostly correct (it is the correct size, the alpha is blended correctly, etc.), but it has vertical artifacts/noise.
(example at http://acaciatreesoftware.com/img/UIImage-artifacts.png)
I tried clearing the context first and then turning off PNG compression, which reduces the artifacting somewhat (and eliminates it on non-stretched images, I think).
Does anyone know a method for drawing stretchable UIImages with out this sort of artifacting happening?
So the answer is: don't do this. Instead, you can paint your overlay procedurally, like so:
- (void)overlayWithColor:(UIColor *)overlayColor forState:(UIControlState)state {
    UIImage *baseImage = [self backgroundImageForState:state];
    CGRect frame = CGRectZero;
    frame.size = baseImage.size;
    // create a new image context
    UIGraphicsBeginImageContext(baseImage.size);
    // get context
    CGContextRef context = UIGraphicsGetCurrentContext();
    // draw background image
    [baseImage drawInRect:frame];
    // overlay color
    CGContextSetFillColorWithColor(context, [overlayColor CGColor]);
    CGContextSetBlendMode(context, kCGBlendModeSourceAtop);
    CGContextFillRect(context, frame);
    // get UIImage
    UIImage *overlaidImage = UIGraphicsGetImageFromCurrentImageContext();
    // clean up context
    UIGraphicsEndImageContext();
    [self setBackgroundImage:overlaidImage forState:state];
}
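Assuming this method lives in a UIButton subclass or category, a hypothetical call site would look something like:
[button overlayWithColor:[[UIColor blackColor] colorWithAlphaComponent:0.3]
                forState:UIControlStateHighlighted];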
Are you being too miserly with your original image and forcing it to stretch rather than shrink? I've found the best results come from images that match the aspect ratio and are reduced in size. Might not solve your problem, though.

UITableViewCell's imageView fit to 40x40

I use the same big images in a tableView and detailView.
I need the imageView to fit into 40x40 when an image is shown in the tableView, but it stretches over half of the screen. I played with several properties but got no positive result:
[cell.imageView setBounds:CGRectMake(0, 0, 50, 50)];
[cell.imageView setClipsToBounds:NO];
[cell.imageView setFrame:CGRectMake(0, 0, 50, 50)];
[cell.imageView setContentMode:UIViewContentModeScaleAspectFill];
I am using SDK 3.0 with the built-in "Cell Objects in Predefined Styles".
I put Ben's code as an extension in my NS-Extensions file so that I can tell any image to make a thumbnail of itself, as in:
UIImage *bigImage = [UIImage imageNamed:@"yourImage.png"];
UIImage *thumb = [bigImage makeThumbnailOfSize:CGSizeMake(50,50)];
Here is the .h file:
@interface UIImage (PhoenixMaster)
- (UIImage *)makeThumbnailOfSize:(CGSize)size;
@end
and then in the NS-Extensions.m file:
@implementation UIImage (PhoenixMaster)

- (UIImage *)makeThumbnailOfSize:(CGSize)size
{
    UIGraphicsBeginImageContextWithOptions(size, NO, UIScreen.mainScreen.scale);
    // draw scaled image into thumbnail context
    [self drawInRect:CGRectMake(0, 0, size.width, size.height)];
    UIImage *newThumbnail = UIGraphicsGetImageFromCurrentImageContext();
    // pop the context
    UIGraphicsEndImageContext();
    if (newThumbnail == nil)
        NSLog(@"could not scale image");
    return newThumbnail;
}

@end
I cache a thumbnail version since using large images scaled down on the fly uses too much memory.
Here's my thumbnail code:
- (UIImage *)thumbnailOfSize:(CGSize)size {
    if (self.previewThumbnail)
        return self.previewThumbnail; // return cached thumbnail
    UIGraphicsBeginImageContext(size);
    // draw scaled image into thumbnail context
    [self.preview drawInRect:CGRectMake(0, 0, size.width, size.height)];
    UIImage *newThumbnail = UIGraphicsGetImageFromCurrentImageContext();
    // pop the context
    UIGraphicsEndImageContext();
    if (newThumbnail == nil)
        NSLog(@"could not scale image");
    self.previewThumbnail = newThumbnail;
    return self.previewThumbnail;
}
Just make sure you properly clear the cached thumbnail if you change your original image (self.preview in my case).
I have mine wrapped in a UIView and use this code:
imageView.contentMode = UIViewContentModeScaleAspectFit;
imageView.autoresizingMask = UIViewAutoresizingFlexibleWidth | UIViewAutoresizingFlexibleHeight;
[self addSubview:imageView];
imageView.frame = self.bounds;
(self is the wrapper UIView, with the dimensions I want - I use AsyncImageView).
I thought Ben Lachman's suggestion of generating thumbnails in advance rather than on the fly was smart, so I adapted his code so it could handle a whole array and to make it more portable (no hard-coded property names).
- (NSArray *)arrayOfThumbnailsOfSize:(CGSize)size fromArray:(NSArray *)original {
    NSMutableArray *temp = [NSMutableArray arrayWithCapacity:[original count]];
    for (UIImage *image in original) {
        UIGraphicsBeginImageContext(size);
        [image drawInRect:CGRectMake(0, 0, size.width, size.height)];
        UIImage *thumb = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        [temp addObject:thumb];
    }
    return [NSArray arrayWithArray:temp];
}
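Hypothetical usage, assuming fullSizeImages is an NSArray of UIImages you have loaded elsewhere:
NSArray *thumbs = [self arrayOfThumbnailsOfSize:CGSizeMake(40, 40) fromArray:fullSizeImages];
cell.imageView.image = [thumbs objectAtIndex:indexPath.row];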
You might be able to use this?
yourTableViewController.rowImage = [UIImage imageNamed:@"yourImage.png"];
and/or
cell.image = yourTableViewController.rowImage;
And if your images are already 40x40 then you shouldn't have to worry about setting bounds and such... but I'm also new to this, so I wouldn't know; I haven't played around with table view row/cell images much.
Hope this helps.
I was able to make this work using Interface Builder and a table view cell. You can set the "Mode" property for an image view to "Aspect Fit". I'm not sure how to do this programmatically.
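For what it's worth, the programmatic equivalent of that Interface Builder "Mode" setting should be:
cell.imageView.contentMode = UIViewContentModeScaleAspectFit;
cell.imageView.clipsToBounds = YES; // keep the scaled image inside the image view's frame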
Try setting UIImageView.autoresizesSubviews and/or UIImageView.contentStretch.