Draw rounded linear gradient (or extended radial gradient) with CoreGraphics - objective-c

I want to do some custom drawing with CoreGraphics. I need a linear gradient on my view, but the thing is that this view is a rounded rectangle, so I want my gradient to be rounded at the corners as well. You can see what I want to achieve in the image below:
So, is this possible to implement in CoreGraphics, or in some other easy programmatic way?
Thank you.

I don't think there is an API for that, but you can get the same effect if you first draw a radial gradient, say, in an (N+1)x(N+1) size bitmap context, then convert the image from the context to a resizable image with left and right caps set to N.
Pseudocode:
UIGraphicsBeginImageContextWithOptions(CGSizeMake(N+1,N+1), NO, 0.0f);
CGContextRef context = UIGraphicsGetCurrentContext();
// <draw the gradient into 'context'>
UIImage* gradientBase = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImage* gradientImage = [gradientBase resizableImageWithCapInsets:UIEdgeInsetsMake(0,N,0,N)];
In case you want the image to scale vertically as well, you just have to set the caps to UIEdgeInsetsMake(N,N,N,N).

I just want to add more sample code for this technique, as some things weren't obvious to me at first. Maybe it will be useful for somebody:
So, let's say we have our custom view class, and in its drawRect: method we put this:
// Defining the rect in which to draw
CGRect drawRect=self.bounds;
CGFloat gradientSize=drawRect.size.height; // The size of the original radial gradient
CGPoint center=CGPointMake(0.5f*gradientSize,0.5f*gradientSize); // Center of the gradient
// Creating the gradient (gray/alpha pairs: from opaque black to white at 20% alpha)
CGFloat colors[4]={0.f,1.f,1.f,0.2f};
CGColorSpaceRef graySpace=CGColorSpaceCreateDeviceGray();
CGGradientRef gradient=CGGradientCreateWithColorComponents(graySpace, colors, NULL, 2);
CGColorSpaceRelease(graySpace); // The gradient retains the color space, so release our reference
// Starting an image context and drawing the gradient into it
UIGraphicsBeginImageContextWithOptions(CGSizeMake(gradientSize, gradientSize), NO, 1.f);
CGContextRef context=UIGraphicsGetCurrentContext();
CGContextDrawRadialGradient(context, gradient, center, 0.f, center, center.x, 0); // Drawing the gradient
CGGradientRelease(gradient); // Done with the gradient
UIImage* gradientImage=UIGraphicsGetImageFromCurrentImageContext(); // Retrieving the image from the context
UIGraphicsEndImageContext(); // Ending the image context
gradientImage=[gradientImage resizableImageWithCapInsets:UIEdgeInsetsMake(0.f, center.x-1.f, 0.f, center.x-1.f)]; // Leaving a 2-pixel-wide area in the center, which will be tiled to fill the whole area
// Drawing image into view frame
[gradientImage drawInRect:drawRect];
That's all. Also, if you're never going to change the gradient while the app is running, you may want to put everything except the last line in the awakeFromNib method, and then in drawRect: just draw gradientImage into the view's frame. Don't forget to retain gradientImage in that case.

Related

coordinate computation of the image thumbnail

This is a code snippet for creating a thumbnail-sized image (from an original large image) and placing it appropriately on top of a table view cell. As I was studying the code, I got stuck at the part where the thumbnail is given a position by setting its abscissa and ordinate. In the method -(void)setThumbnailDataFromImage:(UIImage *)image they set the dimensions and coordinates for the project thumbnail:
-(void)setThumbnailDataFromImage:(UIImage *)image{
CGSize origImageSize= [image size];
// the rectangle of the thumbnail
CGRect newRect= CGRectMake(0, 0, 40, 40);
// figure out a scaling ratio to make sure we maintain the same aspect ratio
float ratio= MAX(newRect.size.width/origImageSize.width, newRect.size.height/origImageSize.height);
// Create a transparent bitmap context with a scaling factor equal to that of the screen
UIGraphicsBeginImageContextWithOptions(newRect.size, NO, 0.0);
// create a path that is a rounded rectangle
UIBezierPath *path= [UIBezierPath bezierPathWithRoundedRect:newRect cornerRadius:5.0];
// make all the subsequent drawing to clip to this rounded rectangle
[path addClip];
// center the image in the thumbnail rectangle
CGRect projectRect;
projectRect.size.width=ratio * origImageSize.width;
projectRect.size.height= ratio * origImageSize.height;
projectRect.origin.x= (newRect.size.width- projectRect.size.width)/2;
projectRect.origin.y= (newRect.size.height- projectRect.size.height)/2;
// draw the image on it
[image drawInRect:projectRect];
// get the image from the image context, keep it as our thumbnail
UIImage *smallImage= UIGraphicsGetImageFromCurrentImageContext();
[self setThumbnail:smallImage];
// get the PNG representation of the image and set it as our archivable data
NSData *data= UIImagePNGRepresentation(smallImage);
[self setThumbnailData:data];
// Cleanup image context resources, we're done
UIGraphicsEndImageContext();
}
I understand the width and height computation, where we multiply origImageSize by the scaling factor/ratio.
But then we use the following to give the thumbnail a position:
projectRect.origin.x= (newRect.size.width- projectRect.size.width)/2;
projectRect.origin.y= (newRect.size.height- projectRect.size.height)/2;
This I fail to understand; I cannot wrap my head around it.
Is this part of the centering process? I mean, are we using a mathematical relation here to position the thumbnail, or is it some arbitrary calculation that could have been anything? Am I missing some fundamental idea behind these two lines of code?
Those two lines are standard code for centering something, although they aren’t quite written in the most general way. You normally want to use:
projectRect.origin.x = newRect.origin.x + newRect.size.width / 2.0 - projectRect.size.width / 2.0;
projectRect.origin.y = newRect.origin.y + newRect.size.height / 2.0 - projectRect.size.height / 2.0;
In your case the author knows the origin is 0,0, so they omitted the first term in each line.
To center a rectangle in another rectangle, you want their centers to line up along both axes. So you take, say, half the container's width (the center of the outer rectangle) and subtract half the inner rectangle's width (which takes you from that center to the inner rectangle's left side). That gives you where the inner rectangle's left side, i.e. its x origin, should be when it is correctly centered.
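For what it's worth, here is the same computation with made-up numbers (the 80x60 source size is an assumption, not from the question):

// Worked example: centering a scaled 80x60 image in the 40x40 thumbnail rect.
CGSize origImageSize = CGSizeMake(80, 60);              // hypothetical source size
CGRect newRect = CGRectMake(0, 0, 40, 40);

float ratio = MAX(newRect.size.width / origImageSize.width,     // 40/80 = 0.5
                  newRect.size.height / origImageSize.height);  // 40/60 = 0.667 -> ratio = 0.667

CGRect projectRect;
projectRect.size.width  = ratio * origImageSize.width;   // ~53.3
projectRect.size.height = ratio * origImageSize.height;  // 40.0
projectRect.origin.x = (newRect.size.width  - projectRect.size.width)  / 2;  // (40 - 53.3)/2 = ~-6.7
projectRect.origin.y = (newRect.size.height - projectRect.size.height) / 2;  // (40 - 40)/2  = 0

// The negative x origin shifts the oversized image left so its horizontal center
// lines up with the thumbnail's center; the overflow on each side is cut off by
// the rounded-rect clip, so the thumbnail shows the middle of the image.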

Optimize CGContextDrawRadialGradient in drawRect:

In my iPad app, I have a UITableView that alloc/inits a UIView subclass every time a new cell is selected. I've overridden drawRect: in this UIView to draw a radial gradient and it works fine, but performance is suffering - when a cell is tapped, the UIView takes substantially longer to draw a gradient programmatically as opposed to using a .png for the background. Is there any way to "cache" my drawRect: method or the gradient it generates to improve performance? I'd rather use drawRect: instead of a .png. My method looks like this:
- (void)drawRect:(CGRect)rect
{
CGContextRef context = UIGraphicsGetCurrentContext();
size_t gradLocationsNum = 2;
CGFloat gradLocations[2] = {0.0f, 1.0f};
CGFloat gradColors[8] = {0.0f,0.0f,0.0f,0.0f,0.0f,0.0f,0.0f,0.5f};
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGGradientRef gradient = CGGradientCreateWithColorComponents(colorSpace, gradColors, gradLocations, gradLocationsNum);
CGColorSpaceRelease(colorSpace);
CGPoint gradCenter = CGPointMake(CGRectGetMidX(self.bounds), CGRectGetMidY(self.bounds));
float gradRadius = MIN(self.bounds.size.width , self.bounds.size.height) ;
CGContextDrawRadialGradient (context, gradient, gradCenter, 0, gradCenter, gradRadius, kCGGradientDrawsAfterEndLocation);
CGGradientRelease(gradient);
}
Thanks!
You can render graphics into a context and then store that as a UIImage. This answer should get you started:
drawRect: is a method on UIView used to draw the view itself, not to pre-create graphic objects.
Since it seems that you want to create shapes, store them, and draw them later, it appears reasonable to create the shapes as UIImages and draw them using UIImageViews. UIImages can be stored directly in an NSArray.
To create the images, do the following (on the main queue; not in drawRect:):
1) create a bitmap context
UIGraphicsBeginImageContextWithOptions(size, opaque, scale);
2) get the context
CGContextRef context = UIGraphicsGetCurrentContext();
3) draw whatever you need
4) export the context into an image
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
5) destroy the context
UIGraphicsEndImageContext();
6) store the reference to the image
[yourArray addObject:image];
Repeat for each shape you want to create.
For details see the documentation for the above mentioned functions. To get a better understanding of the difference between drawing in drawRect: and in arbitrary place in your program and of working with contexts in general, I would recommend you read the Quartz2D Programming Guide, especially the section on Graphics Contexts.
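Applied to the gradient from the question, a minimal sketch could look like this (the method name and the size parameter are assumptions; the drawing code is just the question's gradient moved out of drawRect:):

// A sketch: render the radial gradient once into a UIImage and reuse it,
// instead of redrawing it in every drawRect:.
- (UIImage *)gradientImageWithSize:(CGSize)size
{
    UIGraphicsBeginImageContextWithOptions(size, NO, 0.0); // 0.0 = use the screen's scale
    CGContextRef context = UIGraphicsGetCurrentContext();

    CGFloat gradLocations[2] = {0.0f, 1.0f};
    CGFloat gradColors[8] = {0.0f,0.0f,0.0f,0.0f, 0.0f,0.0f,0.0f,0.5f};
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGGradientRef gradient = CGGradientCreateWithColorComponents(colorSpace, gradColors, gradLocations, 2);
    CGColorSpaceRelease(colorSpace);

    CGPoint center = CGPointMake(size.width / 2.0f, size.height / 2.0f);
    CGFloat radius = MIN(size.width, size.height);
    CGContextDrawRadialGradient(context, gradient, center, 0, center, radius, kCGGradientDrawsAfterEndLocation);
    CGGradientRelease(gradient);

    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}

You could call this once (for example when the view is created), keep the returned UIImage in a property or an NSArray, and either set it on a UIImageView or draw it with drawInRect: from drawRect:.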

Blending two images and drawing resized image from two UIImageViews

I have two ImageViews, one called imageView and the other called subView (which is a subview of imageView).
I want to blend the images in these views together, with the user being able to adjust the alpha of the blend with a pan gesture. My code works, but right now it is slow because we redraw the image every time the pan gesture moves. Is there a faster/more efficient way of doing this?
BONUS Q: I want to allow my subView image to be drawn zoomed in. Currently I've set my subView's content mode to UIViewContentModeCenter; however, I can't seem to draw a zoomed-in part of my image with this content mode. Is there any way around this?
My drawRect: method:
- (void)drawRect:(CGRect)rect
{
float xCenter = self.center.x - self.currentImage1.size.width/2.0;
float yCenter = self.center.y - self.currentImage1.size.height/2.0;
subView.alpha = self.blendAmount; // Customize the opacity of the top image.
UIGraphicsBeginImageContext(self.currentImage1.size);
CGContextRef c = UIGraphicsGetCurrentContext();
CGContextSetBlendMode(c, kCGBlendModeColorBurn);
[imageView.layer renderInContext:c];
self.blendedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
[self.blendedImage drawAtPoint:CGPointMake(xCenter,yCenter)];
}
You need to use the GPU for image processing, which is far faster than using the CPU (as you're doing right now).
You can use the Core Image framework, which is fast and easy to use but requires iOS 5, or you can use OpenGL ES directly, but that requires experience and some knowledge of OpenGL shading.
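As a rough illustration of the Core Image route (a sketch, not the poster's code: currentImage2, blendAmount, and the static context are assumptions; CIDissolveTransition may need a newer deployment target than iOS 5, and Core Image also has a CIColorBurnBlendMode filter if you specifically want the color-burn effect):

// A sketch of a GPU cross-fade between two images with Core Image.
// Requires #import <CoreImage/CoreImage.h> and linking the CoreImage framework.
CIImage *bottom = [CIImage imageWithCGImage:self.currentImage1.CGImage];
CIImage *top = [CIImage imageWithCGImage:self.currentImage2.CGImage]; // hypothetical second image

CIFilter *dissolve = [CIFilter filterWithName:@"CIDissolveTransition"];
[dissolve setValue:bottom forKey:kCIInputImageKey];
[dissolve setValue:top forKey:kCIInputTargetImageKey];
[dissolve setValue:@(self.blendAmount) forKey:kCIInputTimeKey]; // 0.0 = bottom only, 1.0 = top only

// Reuse one CIContext; creating it is expensive, so don't do it on every pan event.
static CIContext *ciContext = nil;
if (ciContext == nil) {
    ciContext = [CIContext contextWithOptions:nil];
}
CGImageRef blendedCG = [ciContext createCGImage:dissolve.outputImage fromRect:bottom.extent];
UIImage *blended = [UIImage imageWithCGImage:blendedCG];
CGImageRelease(blendedCG);

You would then set blended on a UIImageView from the pan-gesture handler instead of triggering drawRect:, updating only kCIInputTimeKey as the gesture moves.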

iOS draw gradient in part of the view

I've created a custom progress bar that subclasses UIView and implements drawRect:. I manage to draw a single gradient over the entire view. However, I'd like to draw several different gradients, each one in a different position. How do I limit CGContextDrawLinearGradient to a smaller rect inside my view?
glossGradient = CGGradientCreateWithColorComponents(rgbColorspace, components, locations, num_locations);
CGPoint topCenter = CGPointMake(start + (CGRectGetMidX(currentBounds)/currentBounds.size.width), 0.0f);
CGPoint midCenter = CGPointMake(start + (CGRectGetMidX(currentBounds)/currentBounds.size.width), currentBounds.size.height);
CGContextDrawLinearGradient(currentContext, glossGradient, topCenter, midCenter, 0);
start = start + (values[i] / currentBounds.size.width);
CGGradientRelease(glossGradient);
}
You can use CGContextClipToRect to restrict the drawing area.
Then for each gradient do:
CGContextSaveGState(currentContext);
CGContextClipToRect(currentContext, theRect); // theRect should be the area where you want to draw the gradient
... // gradient drawing code
CGContextRestoreGState(currentContext);
As stated in Quartz 2D Programming Guide:
When you paint a gradient, Quartz fills the current context. Painting a gradient is different from working with colors and patterns, which are used to stroke and fill path objects. As a result, if you want your gradient to appear in a particular shape, you need to clip the context accordingly.
Since you want to draw each gradient in a rectangle, you will want to do something like this for each gradient and rectangle:
CGContextSaveGState(currentContext); {
CGContextClipToRect(currentContext, currentBounds);
CGContextDrawLinearGradient(currentContext, glossGradient, topCenter, midCenter, 0);
} CGContextRestoreGState(currentContext);
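Putting that together for a segmented progress bar, a sketch of the loop could look like this (segmentWidths and the colors are made-up placeholders, not from the question):

// A sketch: one linear gradient per segment, each clipped to its own rect.
CGContextRef context = UIGraphicsGetCurrentContext();
CGColorSpaceRef rgbSpace = CGColorSpaceCreateDeviceRGB();
CGFloat locations[2] = {0.0f, 1.0f};
NSArray *segmentWidths = @[@40.0f, @70.0f, @55.0f]; // hypothetical segment widths in points

CGFloat x = 0.0f;
for (NSUInteger i = 0; i < segmentWidths.count; i++) {
    CGFloat width = [segmentWidths[i] floatValue];
    CGRect segmentRect = CGRectMake(x, 0.0f, width, self.bounds.size.height);

    // Placeholder colors; in a real progress bar you would vary these per segment.
    CGFloat components[8] = {1.0f, 1.0f, 1.0f, 0.9f,   // near-white, mostly opaque
                             0.2f, 0.6f, 1.0f, 1.0f};  // a blue tone
    CGGradientRef gradient = CGGradientCreateWithColorComponents(rgbSpace, components, locations, 2);

    CGContextSaveGState(context);
    CGContextClipToRect(context, segmentRect);          // restrict drawing to this segment
    CGContextDrawLinearGradient(context, gradient,
                                CGPointMake(CGRectGetMidX(segmentRect), CGRectGetMinY(segmentRect)),
                                CGPointMake(CGRectGetMidX(segmentRect), CGRectGetMaxY(segmentRect)),
                                0);
    CGContextRestoreGState(context);                    // undo the clip for the next segment

    CGGradientRelease(gradient);
    x += width;
}
CGColorSpaceRelease(rgbSpace);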

How do I clip or change alpha of an image (pixels) in Quartz?

I'm working on an iPhone app where there are two image views, and when you touch the top one, the bottom one shows through wherever you tapped.
Basically, what I want to do is cut an ellipse/rounded rect out of an image. To do this I was thinking of either clipping the image or changing the alpha of the pixels in the rect to zero. I am new to Quartz 2D programming, so I am not sure how to do this.
Assuming I have:
UIImageView *topImage;
UIImageView *bottomImage;
How do I delete a CGRect/ellipse/rounded rect from these images?
This is kind of like those lottery tickets that you have to scratch off to reveal if you won.
I would generally try to make a mask from a path (here containing a rounded rectangle) and then mask the image with it, as demonstrated in the Apple docs. One of the benefits of this is that for hit testing all you need to do is call CGPathContainsPoint with the point that was touched (i.e. it will test whether the touch was in the visible area of the image).
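If building a full CGImage mask from the path feels like a lot of machinery, here is a simpler clip-and-clear sketch (a different technique from the masking in the docs; the function and its inputs are hypothetical) that punches a transparent rounded-rect hole in a UIImage:

// A sketch: return a copy of 'sourceImage' with a transparent rounded-rect hole.
// 'sourceImage', 'holeRect' (in image coordinates) and 'cornerRadius' are hypothetical inputs.
UIImage *ImageWithRoundedRectHole(UIImage *sourceImage, CGRect holeRect, CGFloat cornerRadius)
{
    CGRect bounds = CGRectMake(0.0f, 0.0f, sourceImage.size.width, sourceImage.size.height);
    UIGraphicsBeginImageContextWithOptions(bounds.size, NO, sourceImage.scale); // NO = keep an alpha channel
    [sourceImage drawInRect:bounds];                      // draw the full image first

    // Clip to the rounded rect, then clear it so those pixels become fully transparent.
    UIBezierPath *hole = [UIBezierPath bezierPathWithRoundedRect:holeRect cornerRadius:cornerRadius];
    [hole addClip];
    CGContextClearRect(UIGraphicsGetCurrentContext(), holeRect);

    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}

Setting the resulting image on the top image view then lets the bottom image view show through the hole.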
I tried this code:
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGRect frame = CGRectMake(100, 100, 100, 100);
CGPathRef roundedRectPath = [self newPathForRoundedRect:frame radius:5];
CGContextAddPath(ctx, roundedRectPath);
CGContextClip (ctx);
CGPathRelease(roundedRectPath);
(Together with the rounded-rect path function you sent.)
This is on a white view, and beneath the view there is a gray window, so I thought this would just show gray instead of white inside the frame rect, but it didn't do anything...