I'm building an iPad app that includes several bar graphs (30+ bars per graph) and a pull-up menu that slides up from the bottom of the screen. The design calls for these bars to have rounded corners on the top left and top right, but square corners on the bottom left and bottom right.
Right now, I am rounding the top corners of each individual graph bar by using a mask layer:
#import "UIView+RoundedCorners.h"
#import <QuartzCore/QuartzCore.h>
#implementation UIView (RoundedCorners)
-(void)setRoundedCorners:(UIRectCorner)corners radius:(CGFloat)radius {
CGRect rect = self.bounds;
// Create the path
UIBezierPath *maskPath = [UIBezierPath bezierPathWithRoundedRect:rect
byRoundingCorners:corners
cornerRadii:CGSizeMake(radius, radius)];
// Create the shape layer and set its path
CAShapeLayer *maskLayer = [CAShapeLayer layer];
maskLayer.frame = rect;
maskLayer.path = maskPath.CGPath;
// Set the newly created shape layer as the mask for the view's layer
self.layer.mask = maskLayer;
self.layer.shouldRasterize = YES;
}
#end
This works, but there is serious frame loss / lag when I pull up the pull-up menu (note that the menu overlaps, appearing on top of the graph). When I take away the rounded corners, the menu pulls up beautifully. From what I gather, I need a way to round the corners by directly modifying the view, not by using a mask layer. Does anyone know how I could pull this off or have another possible solution? Any help is greatly appreciated.
From what I gather, I need a way to round the corners by directly
modifying the view, not by using a mask layer. Does anyone know how I
could pull this off or have another possible solution?
The usual way is to get the view's layer and set the cornerRadius property:
myView.layer.cornerRadius = 10.0;
That way there's no need for a mask layer or a bezier path. In the context of your method, it'd be:
- (void)setRoundedCorners:(UIRectCorner)corners radius:(CGFloat)radius
{
    self.layer.cornerRadius = radius;
}
Update: Sorry -- I missed this part:
The design calls for these bars to have rounded corners on the top
left and top right, but square corners on the bottom left and bottom
right
If you only want some of the corners rounded, you can't do that by setting the layer's cornerRadius property. Two approaches you could take:
Create a bezier path with two corners rounded and two square, and then fill it; no need for a mask layer to do that (short sketches of both approaches follow this list).
Use a resizable image. See UIImage's -resizableImageWithCapInsets: and -resizableImageWithCapInsets:resizingMode: methods. There are even some methods for animating resizable images, so you could easily have your bars grow to the right length.
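For illustration, a minimal sketch of the first approach: fill a path that rounds only the top corners inside a bar view's drawRect: (barColor is a hypothetical property, and the 4-point radius is just an example):

- (void)drawRect:(CGRect)rect
{
    // Round only the top-left and top-right corners; the bottom stays square.
    UIBezierPath *barPath = [UIBezierPath bezierPathWithRoundedRect:self.bounds
                                                  byRoundingCorners:(UIRectCornerTopLeft | UIRectCornerTopRight)
                                                        cornerRadii:CGSizeMake(4.0, 4.0)];
    [self.barColor setFill];
    [barPath fill];
}

For the second approach, a resizable image whose rounded top corners are baked into the artwork could be built with something like this (barTop is a hypothetical asset name):

UIImage *barImage = [[UIImage imageNamed:@"barTop"] resizableImageWithCapInsets:UIEdgeInsetsMake(4, 4, 0, 4)];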
Try setting shouldRasterize on your layers. Should help.
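If you do keep the mask, rasterizing the composited layer during the menu animation might look like this; the rasterizationScale line is an addition the answer above doesn't mention, but without it the cached bitmap is blurry on Retina screens:

self.layer.shouldRasterize = YES;
self.layer.rasterizationScale = [UIScreen mainScreen].scale;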
Related
I have two ImageViews, one called imageView and the other called subView (which is a subview of imageView).
I want to blend the images on these views together, with the user able to adjust the alpha of the blend with a pan gesture. My code works, but it's slow right now because the image is redrawn every time the pan gesture moves. Is there a faster/more efficient way of doing this?
BONUS Q: I want to allow my subView image to be drawn zoomed in. Currently I've set my subView to be UIViewContentModeCenter; however, I can't seem to draw a zoomed-in part of my image with this content mode. Is there any way around this?
My drawRect: method:
- (void)drawRect:(CGRect)rect
{
    float xCenter = self.center.x - self.currentImage1.size.width/2.0;
    float yCenter = self.center.y - self.currentImage1.size.height/2.0;

    subView.alpha = self.blendAmount; // Customize the opacity of the top image.

    UIGraphicsBeginImageContext(self.currentImage1.size);
    CGContextRef c = UIGraphicsGetCurrentContext();
    CGContextSetBlendMode(c, kCGBlendModeColorBurn);
    [imageView.layer renderInContext:c];
    self.blendedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    [self.blendedImage drawAtPoint:CGPointMake(xCenter, yCenter)];
}
You need to use the GPU for image processing, which is far faster than using the CPU (as you're doing right now).
You can use the Core Image framework, which is very fast and easy to use but requires iOS 5, or you can use OpenGL directly, but then you need to be experienced and have some knowledge of OpenGL shading.
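For instance, a Core Image version of the color-burn blend might look roughly like this; the method name, the cached CIContext, and passing the two source images as parameters are my own sketch rather than code from the question:

#import <CoreImage/CoreImage.h>

- (UIImage *)blendedImageWithTop:(UIImage *)topImage bottom:(UIImage *)bottomImage
{
    CIImage *top = [CIImage imageWithCGImage:topImage.CGImage];
    CIImage *bottom = [CIImage imageWithCGImage:bottomImage.CGImage];

    // Color-burn blend of the top image over the background image, done on the GPU.
    CIFilter *burn = [CIFilter filterWithName:@"CIColorBurnBlendMode"];
    [burn setValue:top forKey:kCIInputImageKey];
    [burn setValue:bottom forKey:kCIInputBackgroundImageKey];

    // Creating a CIContext is expensive, so reuse one instead of rebuilding it every pan event.
    static CIContext *context = nil;
    if (!context) {
        context = [CIContext contextWithOptions:nil];
    }

    CGImageRef cgResult = [context createCGImage:burn.outputImage fromRect:[bottom extent]];
    UIImage *result = [UIImage imageWithCGImage:cgResult];
    CGImageRelease(cgResult);
    return result;
}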
I have an image inside a UIScrollView which I can zoom in and out.
I have a button that lets the user rotate the image 90 degrees:
- (void)RotateImage {
    CGAffineTransform rotateTrans = CGAffineTransformMakeRotation(-90.0 / 180.0 * M_PI); // -90 degrees in radians
    BaseImg.transform = rotateTrans;
}
After the image is rotated I cannot zoom in and out; the image jumps around the screen and snaps back to the unrotated state.
What am I doing wrong? Code examples would be great!
Thanks :)
UIScrollView likes to take over the transforms of the views it contains. There are two solutions:
Rotate the image without changing the containing view.
Create a UIView subclass that displays an image within a sublayer.
To rotate the image, see How to Rotate a UIImage 90 degrees?. If you're always and only doing 90-degree rotation, see @Peter Sarnowski's solution. To adapt it to what you're doing here, assuming that BaseImg is a UIImageView:
- (void)rotateImage
{
    UIImage *sourceImage = [baseImg image];
    UIImage *rotatedImage = [UIImage imageWithCGImage:[sourceImage CGImage]
                                                scale:1.0
                                          orientation:UIImageOrientationRight];
    [baseImg setImage:rotatedImage];
}
This will only rotate once. To have rotateImage work repeatedly, read the existing orientation property and move it on to the next in clockwise or anticlockwise order.
If the image is not square, you may also need to resize baseImg to reflect its new aspect ratio.
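One way the repeated rotation might look, assuming baseImg is the UIImageView from above (the nextOrientation: helper name is mine):

- (UIImageOrientation)nextOrientation:(UIImageOrientation)orientation
{
    // Each step is a quarter turn; mirrored orientations are ignored for simplicity.
    switch (orientation) {
        case UIImageOrientationUp:    return UIImageOrientationRight;
        case UIImageOrientationRight: return UIImageOrientationDown;
        case UIImageOrientationDown:  return UIImageOrientationLeft;
        case UIImageOrientationLeft:  return UIImageOrientationUp;
        default:                      return UIImageOrientationUp;
    }
}

- (void)rotateImage
{
    UIImage *sourceImage = [baseImg image];
    UIImage *rotatedImage = [UIImage imageWithCGImage:[sourceImage CGImage]
                                                scale:sourceImage.scale
                                          orientation:[self nextOrientation:sourceImage.imageOrientation]];
    [baseImg setImage:rotatedImage];
}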
To create a UIView subclass, you need to have it store a CALayer as a sublayer of the view layer. Store the image in the sublayer, and transform the sublayer at will. This is faster, and allows arbitrary rotation, but you need to calculate your own scaling to prevent the rotate image going outside the view bounds.
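A rough sketch of that subclass approach (the class name, property, and method are hypothetical, and layout/scaling of the sublayer is left out):

#import <QuartzCore/QuartzCore.h>

@interface RotatingImageView : UIView
@property (nonatomic, strong) UIImage *image;
- (void)setRotationAngle:(CGFloat)angle;
@end

@implementation RotatingImageView
{
    CALayer *_imageLayer;
}

- (id)initWithFrame:(CGRect)frame
{
    self = [super initWithFrame:frame];
    if (self) {
        _imageLayer = [CALayer layer];
        _imageLayer.frame = self.bounds;
        _imageLayer.contentsGravity = kCAGravityResizeAspect;
        [self.layer addSublayer:_imageLayer];
    }
    return self;
}

- (void)setImage:(UIImage *)image
{
    _image = image;
    _imageLayer.contents = (id)image.CGImage;
}

- (void)setRotationAngle:(CGFloat)angle
{
    // Only the sublayer is transformed, so UIScrollView's zooming transform
    // on the view itself is left untouched.
    _imageLayer.transform = CATransform3DMakeRotation(angle, 0, 0, 1);
}
@end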
To simply rotate the image your code is fine, but to support zooming in and out and then add rotation you need to retain the view's existing transforms (a small sketch of that idea follows the link below). Here is a sample project that can help you:
https://github.com/elc/iCodeBlogDemoPhotoBoard
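The core of "retain its transforms" is to compose the new 90-degree turn onto the transform the view already has instead of replacing it; a minimal sketch (not taken from the linked project):

- (void)RotateImage {
    // Build on whatever transform zooming/previous rotations have already applied,
    // rather than starting from the identity every time.
    BaseImg.transform = CGAffineTransformRotate(BaseImg.transform, -M_PI_2);
}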
I can create a CALayer using [CALayer layer] and then have rounded corners using layer.cornerRadius = x.
After I do this, I have a rounded rectangle layer. Is it possible for me to extract this rounded rectangle outline as a path without re-creating the path myself?
If you just want the path then surely it's easy enough to just make one?
UIBezierPath *roundedRect = [UIBezierPath bezierPathWithRoundedRect:layer.bounds
                                                        cornerRadius:layer.cornerRadius];
If you need to use this in Core Graphics then just ask for its CGPath:
roundedRect.CGPath;
I am working on an iPhone app wherein I need to make a portion of an image transparent by setting its alpha level to 0 as the user moves a finger around on the image. Basically, if you happen to know the App Store application iSteam, the user should be able to rub a finger around on a top image, which reveals the background image underneath.
Currently I am using two UIImageViews: one holds the background image and the other, on top of it, holds a darker image. The user should be able to draw random curves on this darker image, which makes that part of the background image show through. I am not able to figure out how to make the top image (the topmost of the two UIImageViews) transparent in those areas.
Any ideas on this? Also, what should I use for this: Quartz or OpenGL? I am a newbie to iPhone app development and have absolutely no idea about these APIs, so some guidance from the experts will surely help me get ahead with iPhone SDK development.
The UIImageView has a layer which you can refer to as its layer and talk to when you've linked your project to QuartzCore. As the user moves a finger, clip a clear shape in an opaque-color-filled graphics context the same size as the UIImageView, turn that into a CGImageRef, set that as a CALayer's contents (again this CALayer needs to be the same size as the UIImageView), and set that layer as the UIImageView's layer.mask. Wherever the mask is clear, that punches a transparent hole in the layer, which means the view, which means the image the UIImageView is showing. (If that doesn't work, because the UIImageView doesn't like your interfering with its layer, you can use a superview of the UIImageView instead.)
EDIT (next day) - Here's sample code for a layer's delegate that punches a circular hole in the center:
- (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)c {
    CGRect r = CGContextGetClipBoundingBox(c);
    CGRect r2 = CGRectInset(r, r.size.width/2.0 - 10, r.size.height/2.0 - 10);
    UIImage* maskim;
    {
        UIGraphicsBeginImageContextWithOptions(r.size, NO, 0);
        CGContextRef c = UIGraphicsGetCurrentContext();
        CGContextAddEllipseInRect(c, r2);
        CGContextAddRect(c, r);
        CGContextEOClip(c);
        CGContextSetFillColorWithColor(c, [UIColor blackColor].CGColor);
        CGContextFillRect(c, r);
        maskim = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
    }
    CALayer* mask = [CALayer layer];
    mask.frame = r;
    mask.contents = (id)maskim.CGImage;
    layer.mask = mask;
}
So, if that layer is a view's layer, and if the UIImageView is that view's subview, a hole is punched in the UIImageView.
Here's a screen shot of the result:
I'm working on an iPhone app where there are two UIImageViews, and when you touch the top one, the bottom one shows through wherever you tapped.
Basically what I want to do is cut an ellipse/rounded rect out of an image. To do this I was thinking of either clipping the image or changing the alpha of the pixels in that rect to zero. I am new to Quartz 2D programming, so I am not sure how to do this.
Assuming I have:
UIImageView *topImage;
UIImageView *bottomImage;
How do I delete a rect/ellipse/rounded rect from these images?
This is kind of like those lottery tickets that you have to scratch off to reveal if you won.
I would generally try to make a mask from a path (here containing a rounded rectangle) and then mask the image with it, as demonstrated in the Apple docs; a sketch follows below. One of the benefits of this is that for hit testing all you need to do is call CGPathContainsPoint with the point that was touched (i.e., it will test whether the touch was in the visible area of the image).
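For illustration, one way the cut-out-plus-hit-test idea might look, using a CAShapeLayer mask with an even-odd path (holeRect and touchPoint are hypothetical names, and this needs QuartzCore linked in):

CGRect holeRect = CGRectMake(100, 100, 100, 100);   // hypothetical area to cut out
UIBezierPath *hole = [UIBezierPath bezierPathWithRoundedRect:holeRect cornerRadius:10];

// Mask = the full bounds with the rounded rect punched out (even-odd rule),
// so the bottom image shows through the hole.
UIBezierPath *maskPath = [UIBezierPath bezierPathWithRect:topImage.bounds];
[maskPath appendPath:hole];

CAShapeLayer *maskLayer = [CAShapeLayer layer];
maskLayer.frame = topImage.bounds;
maskLayer.fillRule = kCAFillRuleEvenOdd;
maskLayer.path = maskPath.CGPath;
topImage.layer.mask = maskLayer;

// Hit testing, as suggested above: check the touch against the hole's path.
BOOL touchedHole = CGPathContainsPoint(hole.CGPath, NULL, touchPoint, false);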
I tried this code:
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGRect frame = CGRectMake(100, 100, 100, 100);
CGPathRef roundedRectPath = [self newPathForRoundedRect:frame radius:5];
CGContextAddPath(ctx, roundedRectPath);
CGContextClip (ctx);
CGPathRelease(roundedRectPath);
(Together with the rounded rect path function you sent)
This is on a white view, and beneath the view there is a gray window, so I thought this would just show gray instead of white inside the CGRect frame, but it didn't do anything...