CALayer cornerRadius has no effect on 4" device (simulator) - objective-c

I am laying out a view dynamically and programmatically for 3.5" devices as well as for 4" devices.
That part works fine.
But I want rounded corners so that my images appear like playing cards.
And I get rounded corners nicely displayed on 3.5 inch devices in the simulator, for simulated iOS 6.1 and 7 alike.
But when I choose iPhone Retina 4 inch on 6.1 or 7, the UIImage in the UIImageView is displayed in full, without rounded corners.
It works nicely on simulated iPad devices (in iPhone simulation mode - it is an iPhone-only app).
As of today, I do not have a 4" device with me to test on. I can test on a device during the upcoming week.
Here is the relevant code:
- (void)viewWillAppear:(BOOL)animated
{
    [super viewWillAppear:animated];
    self.imageV.image = self.image; // The image property was set by the caller.
    // Layout imageV within self.view with a margin of MARGIN.
    self.imageV.frame = CGRectMake(self.view.frame.origin.x + MARGIN,
                                   self.view.frame.origin.y + MARGIN,
                                   self.view.frame.size.width - 2 * MARGIN,
                                   self.view.frame.size.height - 2 * MARGIN);
    // Set the corner radius and the mask to follow the rounded corners.
    self.imageV.layer.cornerRadius = CORNER_RADIUS;
    self.imageV.layer.masksToBounds = YES;
}
BTW: CORNER_RADIUS is 18 and MARGIN is 15. Changing these values has no effect on the issue.
UPDATE: Thanks to Matt I figured out that the problem disappears when I create the UIImageView programmatically. That is a really nice workaround, plus it points in the right direction, I guess, but it is not a solution. Any ideas what setting in the storyboard editor might have caused the problem?
As far as I can see, auto layout is disabled for all view controllers in this storyboard.

The answer is simple. The code did work. It did add rounded corners to the UIImageView object, and masksToBounds worked well.
But the actual image displayed is smaller. I used Aspect Fit as the content mode to ensure that the image is not squeezed but displayed in its original aspect ratio. Because of the taller layout of the iPhone 5 screen, the image only filled part of its owning UIImageView. I changed the background color to gray for the screenshot, and now it becomes clear.
So the solution will be to calculate the proper size of the image view so that it exactly matches the size of the scaled image. Then it should work.
(I'll update this answer when it is done).
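For reference, the fitted rectangle does not have to be computed by hand; AVFoundation ships a helper for exactly this. A minimal sketch, assuming the imageV outlet and MARGIN constant from above (and linking AVFoundation):
#import <AVFoundation/AVFoundation.h>
// Rect that an aspect-fit image actually occupies inside the margin area.
CGRect fitRect = AVMakeRectWithAspectRatioInsideRect(self.image.size,
                                                     CGRectInset(self.view.bounds, MARGIN, MARGIN));
self.imageV.frame = fitRect; // now the rounded corners hug the visible image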
Update: this is what I finally did: I removed the UIImageView from the storyboard and deal with it programmatically.
Don't get confused by the complexity. I added another view just to cast a shadow, although this is not related to the original question. I wanted to add the shadow anyway. And it turned out that CALayer's shadow and masksToBounds = YES don't really get along. That is why I added a regular UIView which lies in between the card view and the background view.
Finally, this is so much hassle for displaying a simple rectangular image that I think just subclassing UIView and drawing everything with OpenGL or so directly into the CALayer would probably be much easier. :-)
Anyway, this is my code:
- (void)viewDidLoad {
    [super viewDidLoad];
    self.state = @0;
    // Create an image view to carry the image with rounded corners
    // and create a regular view to cast the shadow.
    // Add the shadow view first so that it appears behind
    // the actual image view.
    // Explanation: We need a separate view for the shadow with the same
    // dimensions as the imageView. This is because the imageView's image
    // is rectangular and will only be clipped to rounded corners when the
    // property masksToBounds is set to YES. But this setting will also
    // clip away any shadow that the imageView's layer may have.
    // Therefore we add a separate, mostly empty UIView just behind the
    // UIImageView to cast the shadow.
    self.shadowV = [[UIView alloc] init];
    self.imageV = [[UIImageView alloc] initWithImage:self.image];
    [self.view addSubview:self.shadowV];
    [self.shadowV addSubview:self.imageV];
    // Set the corner radius and the mask to follow the rounded corners.
    [self.imageV.layer setCornerRadius:CORNER_RADIUS];
    [self.imageV.layer setMasksToBounds:YES];
    [self.imageV setContentMode:UIViewContentModeScaleAspectFit];
    // Set the shadow properties.
    [self.shadowV.layer setShadowColor:[UIColor blackColor].CGColor];
    [self.shadowV.layer setShadowOpacity:0.4];
    [self.shadowV.layer setShadowRadius:3.0];
    [self.shadowV.layer setShadowOffset:CGSizeMake(SHADOW_OFFSET, SHADOW_OFFSET)];
    [self.shadowV.layer setCornerRadius:CORNER_RADIUS];
    [self.shadowV setBackgroundColor:[UIColor whiteColor]]; // The view needs some content; otherwise it is not displayed at all, not even its shadow.
}
- (void)viewWillAppear:(BOOL)animated
{
    [super viewWillAppear:animated];
    // Just to be safe.
    if (!self.image) {
        return;
    }
    self.imageV.image = self.image; // The image property was set by the caller.
    // Layout imageV within self.view with a margin of MARGIN.
    self.imageV.frame = CGRectMake(MARGIN, MARGIN,
                                   self.view.bounds.size.width - 2 * MARGIN,
                                   self.view.bounds.size.height - 2 * MARGIN);
    // Calculate the size and position of the image and set the image view to
    // the same dimensions.
    // This works under the assumption that the image content mode is aspect fit.
    // Well, as we are doing so much of the layout manually, it would work with a number of content modes. :-)
    float imageWidth, imageHeight;
    float heightWidthRatioImageView = self.view.frame.size.height / self.view.frame.size.width;
    float heightWidthRatioImage = self.image.size.height / self.image.size.width;
    if (heightWidthRatioImageView > heightWidthRatioImage) {
        // The image view is "taller" than the image itself.
        // --> The image width is set to the imageView width and its height is scaled accordingly.
        imageWidth = self.imageV.frame.size.width;
        imageHeight = imageWidth * heightWidthRatioImage;
    } else {
        // The image view is "wider" than the image itself.
        // --> The image height is set to the imageView height and its width is scaled accordingly.
        imageHeight = self.imageV.frame.size.height;
        imageWidth = imageHeight / heightWidthRatioImage;
    }
    // Layout imageView and shadowView accordingly.
    CGRect imageRect = CGRectMake((self.view.bounds.size.width - imageWidth) / 2,
                                  (self.view.bounds.size.height - imageHeight) / 2,
                                  imageWidth, imageHeight);
    [self.shadowV setFrame:imageRect];
    [self.imageV setFrame:CGRectMake(0.0f, 0.0f, imageWidth, imageHeight)]; // Origin is (0,0) because it exactly overlaps its superview, which just casts the shadow.
}
And this is how it finally looks:

The problem is due to some issue with code or configuration you have not told us about. Proof: I ran the following and it works fine. Note that I create the image view in code (to avoid the auto layout problem) and fixed your frame/bounds confusion, and that I've skipped your self.image, but none of that is really relevant to the issue you are seeing:
#define CORNER_RADIUS 18
#define MARGIN 15
- (void)viewWillAppear:(BOOL)animated
{
    [super viewWillAppear:animated];
    self.imageV = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"im"]];
    [self.view addSubview:self.imageV];
    // Layout imageV within self.view with a margin of MARGIN.
    self.imageV.frame = CGRectMake(self.view.bounds.origin.x + MARGIN,
                                   self.view.bounds.origin.y + MARGIN,
                                   self.view.bounds.size.width - 2 * MARGIN,
                                   self.view.bounds.size.height - 2 * MARGIN);
    // Set the corner radius and the mask to follow the rounded corners.
    self.imageV.layer.cornerRadius = CORNER_RADIUS;
    self.imageV.layer.masksToBounds = YES;
}
It works fine (and you can prove that to yourself). Here is a screen shot of the 4-inch simulator:
Therefore the problem is outside the code that you quote in your question, and cannot be analyzed without further information.

Related

UIView from XIB with Auto Layout to UITableView Header

I am writing because I have a problem with Auto Layout.
I'm trying to create a simple view in Interface Builder with Auto Layout, load it in code, and insert it as the header of a table (not as a section header). Let me briefly explain the requirements.
The image view must be square and as wide as the screen.
The area under the picture, which contains the button and label and reaches to the bottom of the view, must be 50 points high.
Between the image and the button there has to be a fixed distance of 12 points.
Between the image and the label there must be a fixed distance of 13 points.
I am able to get all of these features with Auto Layout. I added a constraint for the aspect ratio of the image (1:1) and the various constraints for the distances. All good.
The real problem is that when launching the app on the iPhone 6+ simulator (414 points of width), the image (along with the label and button) overlaps the cells.
Enabling various transparencies, I noticed that the superview of the image view only increases its width; it does not increase its height! How do I fix this?
This is the code:
- (void)viewDidLoad {
    //...
    PhotoDetailsHeaderView *hView = (PhotoDetailsHeaderView *)[[[NSBundle mainBundle] loadNibNamed:@"PhotoDetailsHeaderView" owner:self options:nil] objectAtIndex:0];
    hView.delegate = self;
    self.tableView.tableHeaderView = hView;
    //...
}
This is how I create the xib:
and this is how it looks in the simulator; the green box is the UIImageView and the yellow box (under the green box) is the main view (or superview):
How can I fix it?
Many thanks to all!
You'll need to add a property to store your PhotoDetailsHeaderView:
@property (nonatomic, strong) PhotoDetailsHeaderView *headerView;
Then calculate its expected frame in viewDidLayoutSubviews. If it needs updating, update its frame and re-set the tableHeaderView property. This last step will force the tableView to adapt to the header's updated frame.
- (void)viewDidLayoutSubviews {
    [super viewDidLayoutSubviews];
    CGRect expectedFrame = CGRectMake(0.0, 0.0, self.tableView.frame.size.width, self.tableView.frame.size.width + 50.0);
    if (!CGRectEqualToRect(self.headerView.frame, expectedFrame)) {
        self.headerView.frame = expectedFrame;
        self.tableView.tableHeaderView = self.headerView;
    }
}
The problem is probably that in iOS you have to reset the header of the table view manually (if it has changed its size). Try something along these lines:
CGRect newFrame = imageView.frame;
newFrame.size.height = newFrame.size.width;
imageView.frame = newFrame;
[self.tableView setTableHeaderView:imageView];
This code should be in the -(void)viewDidLayoutSubviews method of your view controller.
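Wrapped up, a minimal sketch of where that snippet lives (assuming imageView refers to the header view loaded from the XIB):
- (void)viewDidLayoutSubviews {
    [super viewDidLayoutSubviews];
    // Keep the header as wide as the table and square, then re-assign it
    // so the table view adapts to the new height.
    CGRect newFrame = imageView.frame;
    newFrame.size.width = self.tableView.bounds.size.width;
    newFrame.size.height = newFrame.size.width;
    imageView.frame = newFrame;
    [self.tableView setTableHeaderView:imageView];
}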

NSScrollView: fade in a top-border like Messages.app

What I Want to Do:
In Messages.app on OS 10.10, when you scroll the left-most pane (the list of conversations) upwards, a nice horizontal line fades in over about 0.5 seconds. When you scroll back down, the line fades back out.
What I Have:
I am trying to achieve this effect in my own app and I've gotten very close. I subclassed NSScrollView and have done the following:
- (void)awakeFromNib
{
    _topBorderLayer = [[CALayer alloc] init];
    CGColorRef bgColor = CGColorCreateGenericGray(0.8, 1.0f);
    _topBorderLayer.backgroundColor = bgColor;
    CGColorRelease(bgColor);
    _topBorderLayer.frame = CGRectMake(0.0f, 0.0f, self.bounds.size.width, 1.0f);
    _topBorderLayer.autoresizingMask = kCALayerWidthSizable;
    _topBorderLayer.zPosition = 1000000000;
    _fadeInAnimation = [[CABasicAnimation animationWithKeyPath:@"opacity"] retain];
    _fadeInAnimation.duration = 0.6f;
    _fadeInAnimation.fromValue = @0;
    _fadeInAnimation.toValue = @1;
    _fadeInAnimation.removedOnCompletion = YES;
    _fadeInAnimation.fillMode = kCAFillModeBoth;
    [self.layer insertSublayer:_topBorderLayer atIndex:0];
}
- (void)layoutSublayersOfLayer:(CALayer *)layer
{
    NSPoint origin = [self.contentView documentVisibleRect].origin;
    // 10 is a fudge factor for blank space above the first row's actual content.
    if (origin.y > 10)
    {
        if (!_topBorderIsShowing)
        {
            _topBorderIsShowing = YES;
            [_topBorderLayer addAnimation:_fadeInAnimation forKey:nil];
            _topBorderLayer.opacity = 1.0f;
        }
    }
    else
    {
        if (_topBorderIsShowing)
        {
            _topBorderIsShowing = NO;
            // Fade out animation here; omitted for brevity
        }
    }
}
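(For reference, the omitted fade-out is just the mirror image of the fade-in, something along these lines:)
CABasicAnimation *fadeOut = [CABasicAnimation animationWithKeyPath:@"opacity"];
fadeOut.duration = 0.6f;
fadeOut.fromValue = @1;
fadeOut.toValue = @0;
fadeOut.fillMode = kCAFillModeBoth;
[_topBorderLayer addAnimation:fadeOut forKey:nil];
_topBorderLayer.opacity = 0.0f;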
The Problem
The "border" sublayer that I add is not drawing over top of all other content in the ScrollView, so that we end up with this:
The frames around the image, textfield and checkbox in this row of my outlineView are "overdrawing" my border layer.
What Causes This
I THINK this is because the scrollView is contained inside an NSVisualEffectView that has Vibrancy enabled. The reason I think this is that if I change the color of my "border" sublayer to 100% black, this issue disappears. Likewise, if I turn on "Reduce Transparency" in OS X's System Preferences > Accessibility, the issue disappears.
I think the Vibrancy compositing is taking my grey border sublayer and the layers that represent each of those components in the outlineView row and mucking up the colors.
So... how do I stop that for a single layer? I've tried all sorts of things to overcome this. I feel like I'm 99% of the way to a solid implementation, but can't fix this last issue. Can anyone help?
NB:
I am aware that it's dangerous to muck directly with layers in a layer-backed environment. Apple's docs make it clear that we can't change certain properties of a view's layer if we're using layer-backing. However: adding and removing sublayers (as I am) is not a prohibited action.
Update:
This answer, while it works, causes problems if you're using Auto Layout. You'll start to get warnings that the scrollView still needs update after calling -layout because something dirtied the layout in the middle of updating. I have not been able to find a workaround for that yet.
Original solution:
Easiest way to fix the problem is just to inset the contentView by the height of the border sublayer with this:
- (void)tile
{
    id contentView = [self contentView];
    [super tile];
    [contentView setFrame:NSInsetRect([contentView frame], 0.0, 1.0)];
}
Should have thought of it hours ago. Works great. I'll leave the question up for anyone who might be looking to implement these nice fading borders.

iOS - Math help - base image zooms with pinch gesture need overlaid images adjust X/Y coords relative

I have an iPad application that has a base image UIImageView (in this case a large building or site plan or diagram) and then multiple 'pins' can be added on top of the plan (visually similar to Google Maps). These pins are also UIImageViews and are added to the main view on tap gestures. The base image is also added to the main view on viewDidLoad.
I have the base image working with the pinch gesture for zooming, but obviously when you zoom the base image all the pins stay at the same x and y coordinates of the main view and lose their relative positioning on the base image (whose x, y and width, height coordinates have changed).
So far I have this...
- (IBAction)planZoom:(UIPinchGestureRecognizer *)recognizer
{
    recognizer.view.transform = CGAffineTransformScale(recognizer.view.transform, recognizer.scale, recognizer.scale);
    recognizer.scale = 1;
    for (ZonePin *pin in planContainer.subviews) {
        if ([pin isKindOfClass:[ZonePin class]]) {
            CGRect pinFrame = pin.frame;
            // ****************************************
            // code to reposition the pins goes here...
            // ****************************************
            pin.frame = pinFrame;
        }
    }
}
I need help with the math to reposition the pins' x/y coordinates so that they retain their relative positions on the zoomed-in or zoomed-out plan/diagram. The pins obviously should not be scaled/zoomed at all in terms of their width or height - they just need new x and y coordinates relative to their initial positions on the plan.
I have tried to work out the math myself but have struggled to work it through, and unfortunately I am not yet acquainted enough with the SDK to know whether there is built-in provision to help or not.
Help with this math related problem would be really appreciated! :)
Many thanks,
Michael.
InNeedOfMathTuition.com
First, you might try embedding your UIImageView in a UIScrollView so zooming is largely accomplished for you. You can then set the max and min scale easily, and you can scroll around the zoomed image as desired (especially if your pins are subviews of the UIImageView or something else inside the UIScrollView).
As for scaling the locations of the pins, I think it would work to store the original x and y coordinates of each pin (i.e. when the view first loads, when they are first positioned, at scale 1.0). Then when the view is zoomed, set x = (originalX * zoomScale) and y = (originalY * zoomScale).
I had the same problem in an iOS app a couple of years ago, and if I recall correctly, that's how I accomplished it.
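For the scroll view setup itself, a minimal sketch might look like this (property names here are assumptions, not taken from the question):
// In viewDidLoad of a view controller that adopts UIScrollViewDelegate:
self.scrollView.delegate = self;
self.scrollView.minimumZoomScale = 1.0;
self.scrollView.maximumZoomScale = 4.0;
[self.scrollView addSubview:self.planImageView];
self.scrollView.contentSize = self.planImageView.frame.size;

// Tell the scroll view which subview to scale when pinching:
- (UIView *)viewForZoomingInScrollView:(UIScrollView *)scrollView
{
    return self.planImageView;
}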
EDIT: Below is more detail about how I accomplished this (I'm looking at my old code now).
I had a UIScrollView as a subview of my main view, and my UIImageView as a subview of that. My buttons were added to the scroll view, and I kept their original locations (at zoom 1.0) stored for reference.
In the -(void)scrollViewDidScroll:(UIScrollView *)scrollView method:
for (id element in myButtons)
{
    UIButton *theButton = (UIButton *)element;
    CGPoint originalPoint = // get original location however you want
    [theButton setFrame:CGRectMake(
        (originalPoint.x - theButton.frame.size.width / 2) * scrollView.zoomScale,
        (originalPoint.y - theButton.frame.size.height / 2) * scrollView.zoomScale,
        theButton.frame.size.width, theButton.frame.size.height)];
}
For the -(UIView *)viewForZoomingInScrollView:(UIScrollView *)scrollView method, I returned my UIImageView. My buttons scaled in size, but I didn't include that in the code above. If you're finding that the pins are scaling in size automatically, you might have to store their original sizes as well as original coordinates and use that in the setFrame call.
UPDATE...
Thanks to 'Mr. Jefferson's' help in his answer above, albeit with a differing implementation, I was able to work this one out as follows...
I have a scrollView which has a plan/diagram image as a subview. The scrollView is set up for zooming/panning etc.; this includes adding UIScrollViewDelegate to the ViewController.
On the user double-tapping the plan/diagram, a pin image is added as a subview to the scrollView at the touch point. The pin image is a custom 'ZonePin' class which inherits from UIImageView and has a couple of additional properties, including 'baseX' and 'baseY'.
The code for adding the pins...
- (IBAction)planDoubleTap:(UITapGestureRecognizer *)recognizer
{
    UIImage *image = [UIImage imageNamed:@"Pin.png"];
    ZonePin *newPin = [[ZonePin alloc] initWithImage:image];
    CGPoint touchPoint = [recognizer locationInView:planContainer];
    CGFloat placementX = touchPoint.x - (image.size.width / 2);
    CGFloat placementY = touchPoint.y - image.size.height;
    newPin.frame = CGRectMake(placementX, placementY, image.size.width, image.size.height);
    newPin.zoneRef = [NSString stringWithFormat:@"%@%d", @"BF", pinSeq++];
    newPin.baseX = placementX;
    newPin.baseY = placementY;
    [planContainer addSubview:newPin];
}
I then have two methods for handling the scrollView interaction, and these handle the scaling/repositioning of the pins relative to the plan image. The methods are as follows...
- (UIView *)viewForZoomingInScrollView:(UIScrollView *)scrollView
{
    return planImage;
}
- (void)scrollViewDidScroll:(UIScrollView *)scrollView
{
    for (ZonePin *pin in planContainer.subviews) {
        if ([pin isKindOfClass:[ZonePin class]]) {
            CGFloat newX, newY;
            newX = (pin.baseX * scrollView.zoomScale) + (((pin.frame.size.width * scrollView.zoomScale) - pin.frame.size.width) / 2);
            newY = (pin.baseY * scrollView.zoomScale) + ((pin.frame.size.height * scrollView.zoomScale) - pin.frame.size.height);
            CGRect pinFrame = pin.frame;
            pinFrame.origin.x = newX;
            pinFrame.origin.y = newY;
            pin.frame = pinFrame;
        }
    }
}
For reference, the positioning calculations reflect the fact that a pin is centered on the touch point along the x-axis but bottom-aligned along the y-axis.
The only thing left for me to do is to reverse the calculations used in the scrollViewDidScroll method when I add pins while zoomed in. The code for adding pins above only works properly when scrollView.zoomScale is 1.0.
Other than that, it now works great! :)
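For anyone tackling that last step: reversing the formulas above suggests that only baseX/baseY need adjusting when a pin is added while zoomed in; the on-screen frame calculation can stay as it is. A rough, untested sketch (assuming planContainer is the zooming UIScrollView):
// In planDoubleTap, after computing placementX/placementY as above:
CGFloat zoom = planContainer.zoomScale;
// Store the base coordinates in zoomScale == 1.0 terms so that the
// scrollViewDidScroll formulas map them back to the touch point.
newPin.baseX = (touchPoint.x / zoom) - (image.size.width / 2);
newPin.baseY = (touchPoint.y / zoom) - image.size.height;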

iOS: Orientation get future self.view.bounds

Background: I wanted to animate the change in my content along with the orientation. My content's position is relative to self.view.bounds.
Problem: In order to animate the content along with the bounds, I would need to know what the bounds of the view will be at the end. I can hard-code it, but I hope to find a more dynamic way.
Code: So all the animation takes place in willRotateToInterfaceOrientation, as per the following:
- (void)willRotateToInterfaceOrientation:(UIInterfaceOrientation)toInterfaceOrientation duration:(NSTimeInterval)duration {
    // Centering the image
    imageView.frame = CGRectMake(self.view.bounds.origin.x + (self.view.bounds.size.width - image.size.width) / 2,
                                 self.view.bounds.origin.y + (self.view.bounds.size.height - image.size.height) / 2,
                                 image.size.width, image.size.height);
    NSLog(@"%d", (int)[[UIDevice currentDevice] orientation]);
    NSLog(@"%f", self.view.bounds.origin.x);
    NSLog(@"%f", self.view.bounds.origin.y);
    NSLog(@"%f", self.view.bounds.size.width);
    NSLog(@"%f", self.view.bounds.size.height);
}
These are the bounds from before the orientation change. Thus this only works if the rotation is 180 degrees. If I were to use didRotateFromInterfaceOrientation, the animation would happen after the orientation change, which looks awful.
Use willAnimateRotationToInterfaceOrientation:duration: instead. This method gets called from within the rotation animation block, and all the bounds have been set correctly at this point. Docs.
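A minimal sketch of that, reusing the centering code from the question (assuming the same imageView and image variables):
- (void)willAnimateRotationToInterfaceOrientation:(UIInterfaceOrientation)toInterfaceOrientation duration:(NSTimeInterval)duration {
    // self.view.bounds already reflects the new orientation here, and frame
    // changes made in this method are animated as part of the rotation.
    imageView.frame = CGRectMake(self.view.bounds.origin.x + (self.view.bounds.size.width - image.size.width) / 2,
                                 self.view.bounds.origin.y + (self.view.bounds.size.height - image.size.height) / 2,
                                 image.size.width, image.size.height);
}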
If you want it dynamic, then when you initialize imageView, set the autoresizingMask property so that when the imageView's superview resizes on rotation, the margins can auto-resize themselves...
imageView = // init imageView
imageView.autoresizingMask = UIViewAutoresizingFlexibleTopMargin|UIViewAutoresizingFlexibleBottomMargin|UIViewAutoresizingFlexibleLeftMargin|UIViewAutoresizingFlexibleRightMargin;
// additional config
This means you only need to set the frame once then the imageView will always adjust itself to stay in the middle.
Check out the UIView class reference http://developer.apple.com/library/ios/ipad/#documentation/uikit/reference/uiview_class/uiview/uiview.html to see what else you can do with the autoresizingMask property
If I'm understanding you correctly, you just need to swap the X/Y coordinates and the width/height in your CGRectMake to get your future layout for a 90-degree change. If it's a 180-degree change, then it's the same as the current one:
CGRectMake(self.view.bounds.origin.y + (self.view.bounds.size.height - image.size.height)/2 , self.view.bounds.origin.x + (self.view.bounds.size.width - image.size.width)/2, image.size.height, image.size.width);

Proper contentSize of UIScrollView with UIImageView inside it

I'm placing a UIImageView inside of a UIScrollView, basing my code off of the answer in this question. The problem I'm having is that there is a significant amount of white space to the bottom and right, and I can't scroll to some of the image in the top and left. I figure this is due to me incorrectly setting the contentSize of the scrollView. Here's the relevant code:
- (void)viewDidLoad
{
    [super viewDidLoad];
    _imageView.image = _image;
    _imageView.bounds = CGRectMake(0, 0, _imageView.image.size.width, _imageView.image.size.height);
    _scroller.contentSize = _imageView.image.size;
}
The view controller I'm in has three properties, a UIScrollView (_scroller), a UIImageView (_imageView), and a UIImage (_image).
You're setting the UIImageView's bounds property. You want to be setting its frame property instead. Setting the bounds will resize it around its center point (assuming you haven't changed the underlying CALayer's anchorPoint property), which is causing the frame origin to end up negative, which is why you can't see the upper-left.
_imageView.frame = CGRectMake(0, 0, _imageView.image.size.width, _imageView.image.size.height);
Alternate syntax:
_imageView.frame = (CGRect){CGPointZero, _imageView.image.size};