UIImageView autoresizingMask not working in certain cases - cocoa-touch

I am experimenting with a block-breaking iOS app to learn more about UI features. Currently, I am having issues trying to make it work for screen rotation.
I am able to get the blocks to re-arrange properly after screen rotation but am having trouble getting the UIImageView for the paddle to reposition.
My code is split as follows: the VC initializes an object of the BlockerModel class. This object stores a CGRect property (the CGRect corresponding to the paddle's image view).
The VC then creates an image view initialized with the paddle image, sets the autoresizing mask on the image view (to have flexible margins), sets the frame based on the CGRect in the model object, and adds the image view as a subview of the main view handled by the VC.
The code is below.
When I rotate, I am seeing that the ImageView is not being automatically repositioned.
If I do all the image view and CGRect creation in the VC, then it works (code sample 2).
Is this expected behavior? If yes, why is autoresizing not kicking in if the CGRect is obtained from a property in another object?
Full Xcode project code is here (github link)
EDIT
Looks like things don't work if I store the imageView as a property. I was doing this to have quick access to it. Why doesn't it work if imageView is stored as a property?
Code where model is initialized
self.myModel = [[BlockerModel alloc] initWithScreenWidth:self.view.bounds.size.width andHeight:self.view.bounds.size.height];
Model initialization code
-(instancetype) initWithScreenWidth:(CGFloat)width andHeight:(CGFloat)height
{
    self = [super init];
    if (self)
    {
        self.screenWidth = width;
        self.screenHeight = height;
        UIImage* paddleImage = [UIImage imageNamed:@"paddle.png"];
        CGSize paddleSize = [paddleImage size];
        self.paddleRect = CGRectMake((self.screenWidth-paddleSize.width)/2, (1 - PADDLE_BOTTOM_OFFSET)*self.screenHeight, paddleSize.width, paddleSize.height);
    }
    return self;
}
Code in VC where imageView is initialized
self.paddleView = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"paddle"]];
self.paddleView.backgroundColor = [UIColor clearColor];
self.paddleView.opaque = NO;
self.paddleView.autoresizingMask = UIViewAutoresizingFlexibleTopMargin|UIViewAutoresizingFlexibleLeftMargin|UIViewAutoresizingFlexibleRightMargin|UIViewAutoresizingFlexibleBottomMargin;
NSLog(#"Paddle rect is %#",NSStringFromCGRect(self.myModel.paddleRect));
[self.paddleView setFrame:self.myModel.paddleRect];
[self.view addSubview:self.paddleView];
If I instead use this code in the VC to initialize the imageView, things work:
UIImage* paddleImage = [UIImage imageNamed:@"paddle.png"];
CGSize paddleSize = [paddleImage size];
CGRect paddleRect = CGRectMake((self.view.bounds.size.width-paddleSize.width)/2, (1 - PADDLE_BOTTOM_OFFSET)*self.view.bounds.size.height, paddleSize.width, paddleSize.height);
UIImageView *paddleView = [[UIImageView alloc] initWithImage:paddleImage];
paddleView.backgroundColor = [UIColor clearColor];
paddleView.opaque = NO;
paddleView.autoresizingMask = UIViewAutoresizingFlexibleTopMargin|UIViewAutoresizingFlexibleLeftMargin|UIViewAutoresizingFlexibleRightMargin|UIViewAutoresizingFlexibleBottomMargin;
[paddleView setFrame:paddleRect];
[self.view addSubview:paddleView];

Found the issue. I was using the model object to handle all my "game object location" logic. For example, the VC would calculate the X-axis deltas from the touch events and forward them to the model object. CADisplayLink events would also be forwarded so that the model could update the ball location based on velocity and time since the last event, and then use the updated location to detect collisions. This split was used because the model class also had the methods to detect collisions with the sides, the paddle, the ball, etc.
The issue was that the model object was rewriting the CGRect of the paddleView by adding the delta it received from the VC to the origin.x of the current paddleRect it had stored. This paddleRect did not take into account the automatic adjustment to the CGRect that autoresizing makes after a rotation.
The fix was for the VC to set the model's paddleRect (from the paddleView's frame) before calling the method in the model that updates all the game properties and detects collisions. This way the model only takes care of the logic of collision detection and of updating ball movement and velocity based on it. The VC uses the current paddleView location and hence automatically accounts for the adjustment that autoresizing makes to the CGRect after a rotation.
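For illustration, a minimal sketch of what that ordering looks like in the VC's display-link handler. The selector and the model method name here are assumptions, not the actual project code:
// Hypothetical display-link callback in the VC.
- (void)displayLinkFired:(CADisplayLink *)link
{
    // paddleView.frame already reflects the autoresizing adjustment made on rotation,
    // so push it into the model before the model runs its movement/collision logic.
    self.myModel.paddleRect = self.paddleView.frame;
    [self.myModel updateGameStateForDuration:link.duration]; // hypothetical model method
}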
Source code in the GitHub link has been updated.

Related

Can we only configure UI correctly in viewDidAppear? and not in viewWillAppear/viewDidLoad?

I have a static table view with cells that have a rounded border. I have noticed when testing on different simulators that, whilst my auto layout constraints work, the border isn't always the right width. This particular screen consists of a view controller with a UIView containing an embedded table view controller.
I have done some investigating and found that the width of the border actually depends on the width of the storyboard phone. This means that if I have a storyboard for an iPhone 8, the 8+ will have cells that are too short, and vice versa: an 8+ storyboard results in cells that are too long (and extend off screen) for the 8.
Currently I am setting the cell borders in the viewDidLoad, here is the code I am using to configure the cells border:
- (void)configureCellThree {
    // Add border
    CALayer *borderLayer = [CALayer layer];
    CGRect borderFrame = CGRectMake(0, 0, _contentCellThree.frame.size.width, _contentCellThree.frame.size.height);
    [borderLayer setBackgroundColor:[[UIColor clearColor] CGColor]];
    [borderLayer setFrame:borderFrame];
    [borderLayer setCornerRadius:_contentCellThree.frame.size.height / 2];
    [borderLayer setBorderWidth:1.0];
    [borderLayer setBorderColor:[kTextColor2 CGColor]];
    // [borderLayer setOpacity:0.5];
    [_contentCellThree.layer addSublayer:borderLayer];
}
Now if I run this code within the viewDidAppear, everything will work across both devices. I added some logs to my main view controller to find out how things were being set.
- (void)viewDidLoad {
    [super viewDidLoad];
    NSLog(@"VDL - SELF.VIEW = %@", self.view);
    NSLog(@"VDL - CONTAINER VIEW = %@", self.profileScrollingContainerView);
}
- (void)viewDidAppear:(BOOL)animated {
    [super viewDidAppear:animated];
    NSLog(@"VDA- SELF.VIEW = %@", self.view);
    NSLog(@"VDA - CONTAINER VIEW = %@", self.profileScrollingContainerView);
}
I had suspected that viewDidLoad was using the sizing information from the storyboard instead of the view itself (which doesn't seem right). This logging confirms it. If I look at the UIView responsible for displaying my table view, when the storyboard is set to 8+ it has the following frame attributes: X = 0, Y = 349, W = 414, H = 338. Now let's look at the results of the logging:
VDL - SELF.VIEW = <UIView: 0x7fa545401f10; frame = (0 0; 375 667);
VDL - CONTAINER VIEW = <UIView: 0x7fa545401b50; frame = (0 349; 414 338);
VDA- SELF.VIEW = <UIView: 0x7fa545401f10; frame = (0 64; 375 603);
VDA - CONTAINER VIEW = <UIView: 0x7fa545401b50; frame = (0 285; 375 269);
So when the view loads, the table view is getting the wrong information about the view's size. When viewDidAppear gets called, it has the correct sizing of the view and works properly. My issue here is that I don't want to be calling initialising code in my viewDidAppear.
I read here that I should be putting my UI geometry code into viewWillAppear; however, I have tried this and I get the same issues.
VWA - CONTAINER VIEW = <UIView: 0x7fec04d97700; frame = (0 349; 414 338);
So to consolidate my question: how can I get the properties of my view before the view has loaded/appeared, so I can correctly set up my UI?
I have read that I will need to subclass UIView and potentially use setFrame; however, I don't really know how I'd actually go about doing this.
Subclass UITableViewCell and implement layoutSubviews; see the docs:
"Subclasses can override this method as needed to perform more precise layout of their subviews. You should override this method only if the autoresizing and constraint-based behaviors of the subviews do not offer the behavior you want. You can use your implementation to set the frame rectangles of your subviews directly."
So it turns out I was using the wrong method to do this. I found this question which solved my whole issue. Basically if you need to perform UI calculations (such as adding custom views) you should be performing them in -(void)viewDidLayoutSubviews. This method is called after the view has worked out all of its sizing and constraints so anything you do in here will be executed using the right properties of your view!
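For example, a minimal sketch of that in the view controller, with a guard flag (borderConfigured is an assumption) so the sublayer from configureCellThree is only added once even though viewDidLayoutSubviews can run multiple times:
- (void)viewDidLayoutSubviews {
    [super viewDidLayoutSubviews];
    // Geometry is final here, so frame-based setup uses the real view sizes.
    if (!self.borderConfigured) {
        [self configureCellThree];
        self.borderConfigured = YES;
    }
}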

How to get object position and size in UIView in Objective-C

I put a UIImageView in my scene from the Object library, gave it an image, and defined an outlet in the .h file. Now I want to check its coordinates, or center point, or frame X, Y, width, height.
I am using this:
CGRect newFrameSize = CGRectMake(recycleBin.frame.origin.x, recycleBin.frame.origin.y,
recycleBin.frame.size.width, recycleBin.frame.size.height);
or
CGRect newFrameSize = recycleBin.frame;
Logging it with
NSLog(@"%@", NSStringFromCGRect(newFrameSize));
gives the same result either way, which is
2013-01-16 21:42:25.101 xyzapp[6474:c07] {{0, 0}, {0, 0}}
I want its actual position and size when the view controller has loaded, so that when the user taps the image view it will fade out, zooming in towards the user, and disappear; and when the user taps a reset button, it will fade in and zoom back to its original form (the reverse of the previous animation).
Also, give me a hint on how to perform this animation on a UIImageView, or on any button or label. Thanks.
Unfortunately, you can't check an item's actual frame as set in IB in -viewDidLoad. The earliest you can check it (that I've found) is by overriding -viewDidAppear:. But, since -viewDidAppear: could be called multiple times throughout the life of the view, you need to make sure you're not saving the frame while it's in the modified state.
- (void)viewDidAppear:(BOOL)animated
{
    [super viewDidAppear:animated];
    if (CGRectEqualToRect(savedFrame, CGRectZero)) {
        savedFrame = self.recycleBin.frame;
        NSLog(@"Frame: %@", NSStringFromCGRect(savedFrame));
    }
}
Where savedFrame is a member variable (or you could make it a property).
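For example (the class name here is hypothetical):
@interface MyViewController () {
    CGRect savedFrame; // original frame of recycleBin, captured once in -viewDidAppear:
}
@end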
From the description of the animation you're wanting, it sounds like adjusting the frame isn't the way to go about it. It sounds like you're wanting to get the effect of the view stretching and fading out (and the reverse when being reset)? If so, some code like this might be more so what you're looking for...
Fade out:
float animationDuration = 2.0f; // Duration of animation in seconds
float zoomScale = 3.0f; // How much to zoom during the animation
[UIView animateWithDuration:animationDuration animations:^{
    CGAffineTransform transform = CGAffineTransformMakeScale(zoomScale, zoomScale);
    self.recycleBin.transform = transform;
    self.recycleBin.alpha = 0; // Make fully transparent
}];
And then, to reset the view:
float animationDuration = 2.0f; // Duration of animation in seconds
[UIView animateWithDuration:animationDuration animations:^{
    CGAffineTransform transform = CGAffineTransformMakeScale(1.0f, 1.0f);
    self.recycleBin.transform = transform;
    self.recycleBin.alpha = 1.0; // Make fully opaque
}];
You can play around with the numbers to see if you get the effects you desire. Most animations in iOS are actually extremely simple to do. This code would work for any UIView subclass.
It sounds as if your IBOutlet is not attached to your class.
Open up your view controller header file (if that is where your property declaration is) and look beside the declaration: when an IBOutlet is connected, the circle in the editor gutter (to the left of the line number) is filled in, meaning it is connected to your scene. If the circle is not filled in, the outlet is not connected.

Draw several UIViews on one layer

Is it possible to draw several UIViews with custom drawing on a single CALayer so that they don't each have a backing store?
UPDATE:
I have several UIViews of the same size that share the same superview. Right now each of them has custom drawing, and because of their large size they create 600-800 MB of backing stores on an iPad 3. So I want to compose their output in one view and consume several times less memory.
Every view has its own layer, and you can't change that.
You could enable shouldRasterize to flatten a view hierarchy, which might help in some cases, but that needs GPU memory.
Another way could be to create an image context, merge the drawings into an image, and set that as the layer contents.
One of last year's WWDC session videos about drawing showed a drawing app where many strokes were transferred into an image to speed up drawing.
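A minimal sketch of the shouldRasterize option, assuming a containerView that holds the custom-drawn subviews (the trade-off is GPU memory for the rasterized cache):
// Flatten the container and its subviews into a single rasterized bitmap.
containerView.layer.shouldRasterize = YES;
containerView.layer.rasterizationScale = [UIScreen mainScreen].scale; // avoid blurriness on Retina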
Since the views will share the same backing store, I assume you want them to share the same image that results from the layer's custom drawing, right? I believe this can be done with something similar to:
// create your custom layer
MyCustomLayer* layer = [[MyCustomLayer alloc] init];
// create the custom views
UIView* view1 = [[UIView alloc] initWithFrame:CGRectMake( 0, 0, layer.frame.size.width, layer.frame.size.height)];
UIView* view2 = [[UIView alloc] initWithFrame:CGRectMake( 100, 100, layer.frame.size.width, layer.frame.size.height)];
// have the layer render itself into an image context
UIGraphicsBeginImageContext( layer.frame.size );
CGContextRef context = UIGraphicsGetCurrentContext();
[layer drawInContext:context];
UIImage* image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// set the backing stores (a.k.a the 'contents' property) of the view layers to the resulting image
view1.layer.contents = (id)image.CGImage;
view2.layer.contents = (id)image.CGImage;
// assuming we're in a view controller, add those views to the hierarchy
[self.view addSubview:view1];
[self.view addSubview:view2];

How do I transfer the frame *and* transform from one UIView to another without distortion?

I have a UIView that may have scale and/or rotation transforms applied to it. My controller creates a new controller and passes the view to it. The new controller creates a new view and tries to place it in the same location and rotation as the passed view. It sets the location and size by converting the original view's frame:
CGRect frame = [self.view convertRect:fromView.frame fromView:fromView.superview];
ImageScrollView *isv = [[ImageScrollView alloc]initWithFrame:frame image:image];
This works great, with the scaled size and location copied perfectly. However, if there is a rotation transform applied to fromView, it does not transfer.
So I added this line:
isv.transform = fromView.transform;
That nicely transfers the rotation, but also the scale transform. The result is that the scale transform is effectively applied twice, so the resulting view is much too large.
So how do I go about transferring the location (origin), scale, and rotation from one view to another, without doubling the scale?
Edit
Here is a more complete code example, where the original UIImageView (fromView) is being used to size and position a UIScrollView (the ImageScrollView subclass):
CGRect frame = [self.view convertRect:fromView.frame fromView:fromView.superview];
frame.origin.y += pagingScrollView.frame.origin.y;
ImageScrollView *isv = [[ImageScrollView alloc]initWithFrame:frame image:image];
isv.layer.anchorPoint = fromView.layer.anchorPoint;
isv.transform = fromView.transform;
isv.bounds = fromView.bounds;
isv.center = [self.view convertPoint:fromView.center fromView:fromView.superview];
[self.view insertSubview:isv belowSubview:captionView];
And here is the entirety of the configuration in ImageScrollView:
- (id)initWithFrame:(CGRect)frame image:(UIImage *)image {
    if (self = [self initWithFrame:frame]) {
        CGRect rect = CGRectMake(0, 0, frame.size.width, frame.size.height);
        imageLoaded = YES;
        imageView = [[UIImageView alloc] initWithImage:image];
        imageView.frame = rect;
        imageView.contentMode = UIViewContentModeScaleAspectFill;
        imageView.clipsToBounds = YES;
        [self addSubview:imageView];
    }
    return self;
}
It looks as though the transform causes the imageView to scale up too large, as you can see in this ugly video.
Copy the first view's bounds, center, and transform to the second view.
Your code doesn't work because frame is a value that is derived from the bounds, center, and transform. The setter for frame tries to do the right thing by reversing the process, but it can't always work correctly when a non-identity transform is set.
The documentation is pretty clear on this point:
If the transform property is not the identity transform, the value of this property is undefined and therefore should be ignored.
...
If the transform property contains a non-identity transform, the value of the frame property is undefined and should not be modified. In that case, you can reposition the view using the center property and adjust the size using the bounds property instead.
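Applied to the code in the question, a sketch of that approach might look like this (ImageScrollView, image, and captionView are taken from the question's own snippet; the frame is never set directly):
// Let bounds + center + transform carry the geometry instead of converting fromView.frame.
ImageScrollView *isv = [[ImageScrollView alloc] initWithFrame:fromView.bounds image:image];
isv.bounds = fromView.bounds;                                                       // size, pre-transform
isv.center = [self.view convertPoint:fromView.center fromView:fromView.superview];  // position
isv.transform = fromView.transform;                                                 // rotation + scale, applied exactly once
[self.view insertSubview:isv belowSubview:captionView];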
Let's say viewA is the first view, which contains the frame and transform, and you want to pass those values to viewB.
To do that, you need to get the original frame of viewA and pass it to viewB before passing the transform. Otherwise, viewB's frame will be changed one more time when you apply the transform.
To get the original frame, just set viewA.transform to CGAffineTransformIdentity.
Here is the code:
CGAffineTransform originalTransform = viewA.transform; // Remember old transform
viewA.transform = CGAffineTransformIdentity; // Remove transform so that you can get original frame
viewB.frame = viewA.frame; // Pass originalFrame into viewB
viewA.transform = originalTransform; // Restore transform into viewA
viewB.transform = originalTransform; // At this step, the transform will change the frame and make it match viewA's
After that, viewA and viewB will have the same appearance in the superview.

Dynamically creating UIImageViews in a custom class method call in Xcode

I want to dynamically create a UIImageView and add it to the view when a touch is registered within the frame of another UIImageView. I want these to stay on the screen and remain movable when a touch is registered inside the frame of each UIImageView. Here is the code I have now, but it doesn't quite work right and is really glitchy (i.e. I can pick up a new view and it is created, but it doesn't follow the touch properly as the touch is moved):
image = [UIImage imageNamed:#"1.png"];
UIImageView *onePieceCopy = [[UIImageView alloc] initWithImage:image];
onePieceCopy.frame = CGRectMake(currentPos.x, currentPos.y, 75, 75);
[self addSubview:onePieceCopy];
if (CGRectContainsPoint([onePieceCopy frame], position)) {
onePieceCopy.center = position;
[self bringSubviewToFront:onePieceCopy];
}
[onePieceCopy release];
This code is in a switch statement that is in a method that is called when a touch is registered inside the frame of a designated UIImageView.
The ideal result would be a system not unlike a map editor, where you drag parts from a 'staging' area onto the map, if that makes any sense. Does anyone know how to do this, or how I can improve my code to get the desired result?
WWDC 2011 (you must be a registered developer): watch "Session 104 - Advanced Scroll View Techniques". At about minute 40 they do a nice pick-and-place of a UIImage.
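As an alternative rough sketch (not from that session): create the copy once and attach a UIPanGestureRecognizer to it, so it follows the finger without being re-created on every touch event. The handlePan: selector is an assumption, and this keeps the question's manual-retain-release style:
// When the copy is created (once, not on every touch event):
UIImageView *onePieceCopy = [[UIImageView alloc] initWithImage:image];
onePieceCopy.frame = CGRectMake(currentPos.x, currentPos.y, 75, 75);
onePieceCopy.userInteractionEnabled = YES; // image views ignore touches by default
UIPanGestureRecognizer *pan = [[UIPanGestureRecognizer alloc] initWithTarget:self
                                                                      action:@selector(handlePan:)];
[onePieceCopy addGestureRecognizer:pan];
[pan release];
[self addSubview:onePieceCopy];
[onePieceCopy release];

// Move the dragged view by the pan translation, then reset the translation.
- (void)handlePan:(UIPanGestureRecognizer *)gesture
{
    CGPoint translation = [gesture translationInView:self];
    gesture.view.center = CGPointMake(gesture.view.center.x + translation.x,
                                      gesture.view.center.y + translation.y);
    [gesture setTranslation:CGPointZero inView:self];
}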