I am trying to achieve something similar to the screenshot down below.
I have tried to do this by simply changing the view's frame property and increasing its width and height, but it
1) does not really enlarge the view and its subviews, it just increases the frame's size, and
2) only grows from the left, so the result does not look even.
Also, is there a way to animate this? I could just place it in UIView's animateWithDuration: block, right?
There are two approaches you can take:
If you are using a standard UITableView, then simply use the built-in support described in Managing the Reordering of Rows.
If you are not using UITableView and are doing something more custom with UIViews, then
what you need is the view's transform property. Scaling a view is done with Core Graphics affine transforms when you want to affect the UIView (as opposed to the layer, which uses Core Animation transforms).
To scale a view, use the following (this scales the view to 2x):
// 2x
[yourView setTransform:CGAffineTransformMakeScale(2.0, 2.0)];
To translate, use
// Move origin by 100 on both axis
[yourView setTransform:CGAffineTransformMakeTranslation(100.0, 100.0)];
You can also play around with alpha (set it to, say, 0.5) to give it that extra translucent look you have in the screenshot.
To animate these, wrap them in an animation block. If you want to transform the view with both of these, then you need to concatenate them.
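For example, a minimal sketch (using yourView from the snippets above; the 0.3 s duration is arbitrary) that animates both transforms at once by concatenating them:
// Animate a combined scale + translation by concatenating the two transforms
CGAffineTransform scale = CGAffineTransformMakeScale(2.0, 2.0);
CGAffineTransform translate = CGAffineTransformMakeTranslation(100.0, 100.0);
[UIView animateWithDuration:0.3 animations:^{
    yourView.transform = CGAffineTransformConcat(scale, translate);
    yourView.alpha = 0.5;
}];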
Apply a transformation to the view (it is forwarded to the view's layer) :)
That can also be animated :) (wrap it in an animation block; if you set the transform on a standalone CALayer directly, it is even animated implicitly).
self.view.transform = CGAffineTransformScale(self.view.transform, 2,2);
Related
I have a UIScrollView and a number of objects (UIView compositions) with UIImageViews inside them. Some of the UIImageViews have a round border (I use myImageView.layer.masksToBounds = YES; for this). Others have rectangular borders and show only part of the image (I use the Clip Subviews property in Interface Builder for this).
The issue is that these clipping properties strongly affect performance while scrolling.
Profiling results on an iPod touch (4th generation):
with the clip properties enabled (both or one of them) I get around 30 fps while scrolling
with the clip properties disabled I get a full 60 fps while scrolling
I really need to clip some images to round bounds and others to rectangular bounds (to show part of the image). So here is my question: what ways are there to improve performance? Maybe there are lower-level ways to do it (drawRect: or something), maybe it would be useful to play around with alpha masking, or am I just doing something wrong?
When you have graphically intensive masks and things, a simple and easy way to improve performance (often times dramatically) is to set shouldRasterize to YES on the layer for that item:
#import <QuartzCore/QuartzCore.h>
// ...
view.layer.shouldRasterize = YES;
This will rasterize the view into a buffer, so it isn't constantly re-rendered. It takes up extra memory for each view, so you should really try to recycle/reuse views as you scroll, similar to how a table view does.
For correct behaviour on retina display you also need to set an appropriate value for rasterizationScale:
view.layer.rasterizationScale = view.window.screen.scale; // or [UIScreen mainScreen]
I've had great success with this for things like scrolling photo galleries, where each item had rounded corners, shadows, etc.
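As a rough sketch (itemView here is a hypothetical gallery item, not code from the question), the setup might look like this:
// Hypothetical gallery item: rounded corners, rasterized once into a
// bitmap so scrolling does not re-render the clipping every frame.
itemView.layer.cornerRadius = 8.0f;
itemView.layer.masksToBounds = YES;
itemView.layer.shouldRasterize = YES;
itemView.layer.rasterizationScale = [UIScreen mainScreen].scale;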
I can't find an answer for this one.
I would like to know how to have the image in a CALayer drawn at a size smaller than the CALayer's bounds.
I've got several pawns in an iPad game; each is a CALayer, and I have them resize simply with contentsGravity = kCAGravityResizeAspect. The image is 128x128 inside a CALayer of 30x30, so the image gets resized automatically to 30x30, and because both are square the aspect ratio is maintained and it works.
Here I set the CALayer's bounds proportionally to the superlayer's size, so the pawns always have the same relative size in the view. This method is in my sprite class, a subclass of CALayer:
-(void) setSpriteScaleToDice {
    // Pawn side = 1/15 of the superlayer's width, so the pawn keeps the
    // same relative size no matter how big the board is.
    CGFloat newSize = [self superlayer].bounds.size.width * 0.066666667f;
    self.bounds = CGRectMake(0.0f, 0.0f, newSize, newSize);
    self.contentsGravity = kCAGravityResizeAspect;
}
Note that in my case the CALayer bounds get at most 30x30, which is small for a touch target. That's the problem I'm facing: due to this small size it's difficult to "touch" the pawns, and sometimes the touch fails...
One of the ideas I'm considering is to increase the bounds of the CALayer while keeping the image at its original size. The problem is that I've searched a lot and tried several options with contentsGravity, contentsCenter, contentsScale, etc... without success.
In particular, according to the Apple docs it looks like the way to go is contentsCenter (and not contentsGravity), however I get deformation of the bitmap...
Please, any idea is really welcome, and thanks in advance,
Luis
This is probably a silly question, but why are you using CALayers for this instead of UIViews? UIImageView has a contentMode property that lets you do this easily (not to mention being easier to use for touch event handling).
That said, CALayer has a contentsRect property that appears to let you define a sub-rectangle for contents to be drawn within, so that may let you do what you want.
Another option would be to place your image layer inside a larger layer and use that for the hit test.
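A minimal sketch of that idea (pawnLayer, imageLayer and the "pawn" image name are made up for illustration):
// Larger, transparent container layer used only to enlarge the touch target.
CALayer *pawnLayer = [CALayer layer];
pawnLayer.bounds = CGRectMake(0.0f, 0.0f, 60.0f, 60.0f);

// Visible image layer, kept at the small 30x30 size, centered in the container.
CALayer *imageLayer = [CALayer layer];
imageLayer.bounds = CGRectMake(0.0f, 0.0f, 30.0f, 30.0f);
imageLayer.position = CGPointMake(30.0f, 30.0f);
imageLayer.contents = (id)[UIImage imageNamed:@"pawn"].CGImage;
imageLayer.contentsGravity = kCAGravityResizeAspect;
[pawnLayer addSublayer:imageLayer];

// Hit-test against the container so the effective touch area is 60x60:
// CALayer *hit = [pawnLayer hitTest:pointInSuperlayerCoordinates];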
From the CALayer header, contentsScale (a CGFloat) is documented as:
/* Defines the scale factor applied to the contents of the layer. If
* the physical size of the contents is '(w, h)' then the logical size
* (i.e. for contentsGravity calculations) is defined as '(w /
* contentsScale, h / contentsScale)'. Applies to both images provided
* explicitly and content provided via -drawInContext: (i.e. if
* contentsScale is two -drawInContext: will draw into a buffer twice
* as large as the layer bounds). Defaults to one. Animatable. */
If you want your image drawn in the CALayer at a size other than the CALayer's bounds, you need to implement your own drawInContext: method and draw the image there rather than setting the CALayer's contents property. Do not set the contents property; create your own property to track the image you want to draw.
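A rough sketch of that approach (the SpriteLayer class and its image property are invented for illustration):
@interface SpriteLayer : CALayer
@property (nonatomic, strong) UIImage *image; // tracked ourselves, contents stays unset
@end

@implementation SpriteLayer

- (void)drawInContext:(CGContextRef)ctx {
    // Core Graphics has a flipped coordinate system compared to UIKit,
    // so flip the context before drawing the CGImage.
    CGContextTranslateCTM(ctx, 0.0f, self.bounds.size.height);
    CGContextScaleCTM(ctx, 1.0f, -1.0f);

    // Draw the image at a fixed 30x30, centered, no matter how large the
    // layer's bounds (and therefore the touch target) are.
    CGSize drawSize = CGSizeMake(30.0f, 30.0f);
    CGRect drawRect = CGRectMake((self.bounds.size.width - drawSize.width) / 2.0f,
                                 (self.bounds.size.height - drawSize.height) / 2.0f,
                                 drawSize.width, drawSize.height);
    CGContextDrawImage(ctx, drawRect, self.image.CGImage);
}

@end
Remember to call setNeedsDisplay on the layer so drawInContext: actually runs after you change the image or the bounds.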
I have a problem understanding how the parameter passed to the drawRect: method is defined when a rotation transformation is applied to a UIView.
To give an example I have a UIView which I rotated with an angle of 307 degree.
In the drawRect: method I log the following:
self.frame: {{103.932, 273.254}, {64.3007, 84.3374}}
rect (the parameter passed to drawRect:): {{0, 0.102913}, {18, 89}}
The problem is that according to the documentation I should not draw outside of rect, but considering what I should draw, there is no way my images will fit there.
Can anyone explain to me how I am supposed to use drawRect: in the case where my UIView is rotated?
To give more detail about my problem, here is what I do:
I have a scrollview with a contentView inside (subclassed). I add my UIViews in the content view.
The views in question are composed of a handler image (bottom left) and the main image (top right). Users are supposed to grab the view by pressing the handler but that's not the point.
The drawRect: method of the UIView contains the following:
[_image drawInRect:CGRectMake(handlerSize.width, 0, _image.size.width, _image.size.height)];
CGSize size = CGSizeMake(kHandPickerWidth/self.scrollViewScale, kHandPickerHeight/self.scrollViewScale);
[_handPickerImage drawInRect:CGRectMake(0, _image.size.height, size.width, size.height)];
The UIView objects are added to the content view at viewWillAppear, doing the following:
first instantiate the view,
then addSubview:,
then I set the scrollViewScale parameter,
then I set the frame parameter (according to the top-right image displayed, which may vary),
then I rotate the UIView.
Starting from step three, the code is executed from
- (void)scrollViewDidZoom:(UIScrollView *)scrollView
to make sure every variable is set properly when display is needed.
The line defining the size variable adjusts the marker's size no matter what the zoomScale value of the scroll view is.
You basically just draw as you would normally, and the painting will be rotated by iOS for you. You can get this transformation information if you want to.
You need to get a reference to the current graphics context:
CGContextRef ctx = UIGraphicsGetCurrentContext();
Then query the transformation matrix directly:
CGAffineTransform tf = CGContextGetCTM(ctx);
NSLog(#"current ctm: %#",NSStringFromCGAffineTransform(tf));
or better, get the transfrom from your drawing function to the device:
CGAffineTransform tf = CGContextGetUserSpaceToDeviceSpaceTransform(ctx);
NSLog(#"user->device transform: %#",NSStringFromCGAffineTransform(tf));
And in drawRect: you should not rely too much on the passed CGRect, because it mostly serves as a hint about which piece of the view needs updating (e.g. because you called -setNeedsDisplayInRect: on it). To get the actual bounds your view lives in, use self.bounds.
Drawing outside of the CGRect is no real problem, but will only hurt performance a little.
edit
ps. to clarify: self.frame is the frame of your view in the parent view coordinate system. It changes if you move, rotate or otherwise transform the view. self.bounds is the frame of your view in its own coordinate system, and (therefore) remains constant under changes of position or transformations.
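A quick illustration of the difference (hypothetical logging, not from the original post):
// bounds stays constant, frame changes once a rotation transform is applied
UIView *v = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 100, 50)];
NSLog(@"before: frame=%@ bounds=%@",
      NSStringFromCGRect(v.frame), NSStringFromCGRect(v.bounds));
v.transform = CGAffineTransformMakeRotation(307.0 * M_PI / 180.0);
NSLog(@"after:  frame=%@ bounds=%@",
      NSStringFromCGRect(v.frame), NSStringFromCGRect(v.bounds));
// frame is now the bounding box of the rotated view in its superview;
// bounds is still {{0, 0}, {100, 50}}.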
So I found a solution to my problem:
I was setting the frame property multiple times while a non-identity CGAffineTransform was applied, which is not supposed to be done (frame is undefined when a view's transform is not the identity).
Now, each time I need to change the frame, I reset the affine transform, change the frame, and then set the affine transform back.
Everything works as expected this way.
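In code, the pattern looks roughly like this (a sketch; newFrame is whatever frame you need to apply):
// Change a view's frame safely while it has a rotation applied
CGAffineTransform savedTransform = view.transform;
view.transform = CGAffineTransformIdentity;  // frame is only well-defined here
view.frame = newFrame;                       // hypothetical new frame
view.transform = savedTransform;             // reapply the rotation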
I want to develop an iPhone application using the utility template, where the flip side is semi-transparent displaying the contents of the main view, flipped horizontally.
Edit: To create the illusion of semi-transparent flip side, I'd display the same content on the flip view as in the main view, but mirror the content and lower the alpha.
Is it possible to display text using a UILabel, but mirror the text, i.e. flip it horizontally? The Apple dev pages do not give me any help on this issue.
As August said, I'm not sure of your use case on this, but there's a reasonably straightforward way to do it using Core Animation. First, you'll need to add the QuartzCore framework to your project and do a
#import <QuartzCore/QuartzCore.h>
somewhere in your headers.
Then, you can apply a rotational transform to your UILabel's underlying layer using the following:
yourLabel.layer.transform = CATransform3DMakeRotation(M_PI, 0.0f, 1.0f, 0.0f);
which will rotate the label's CALayer by 180 degrees (pi radians) about the Y axis, producing the mirror effect you're looking for.
In the Utility template, the flipside isn't shown semi-transparent. This is most likely for performance reasons on the iPhone. I'm not sure there's any value to having your label flipped if it's not providing any functionality.
That said, I'd look into UIView's transform property.
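For example, a horizontal mirror can also be done with a plain view transform (a sketch; yourLabel is whatever label you want to flip):
// Negative x scale mirrors the view horizontally; a lower alpha gives
// the semi-transparent flipside effect described in the question.
yourLabel.transform = CGAffineTransformMakeScale(-1.0f, 1.0f);
yourLabel.alpha = 0.5f;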
I'm trying to draw a graph that is indefinitely large horizontally, and the same height as the screen. I've added a UIScrollView, and a subclass of a UIView within it, which implements the -drawRect: method. In the simulator, everything works fine, but on the device, it can't seem to draw the graph after it reaches a certain size.
I'm already caching pretty much everything I can, and basically only calling CGContextAddLineToPoint in the -drawRect: section. I'm only drawing what's visible on the screen. I have a delegate to the UIScrollView which listens for -scrollViewDidScroll: which then tells the graph to redraw itself ([graphView setNeedsDisplay]).
I found one suggestion that says to override the +layerClass method and return [CATiledLayer class]. This does allow the graph to actually draw on the device, but it performs very poorly: it's incredibly slow to draw, and the fade-in that occurs is undesirable.
Any suggestions?
Well, here's my answer: I basically did something similar to how UITableView works with its cells. I have an NSMutableSet of GraphView objects which stores unused graphs. When a section of the scroll view becomes visible, I take a graph view from that set (or make a new one if the set is empty). It already had a scrollX property to determine which part of the graph it was supposed to draw, so I set scrollX to the correct value and, instead of using the screen width, I gave it an arbitrary width to draw. When a view goes out of the scroll view, it is removed from the UIScrollView and added back to the set.
I wonder, though, if I really even need to remove them when they go out of view? It may be prudent to leave them in and remove the off-screen ones only if I get a low-memory warning. That might get rid of the pause whenever a section of graph that hasn't changed needs to be redrawn.
My saving grace here was that my GraphView already was set up to draw only a portion of the graph. All I needed to do then was just make more than one of them.
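A sketch of that recycling pattern (GraphView and its scrollX property come from the description above; the controller class, reusePool, and helper names are invented for illustration):
#import <UIKit/UIKit.h>

@interface GraphView : UIView
@property (nonatomic, assign) CGFloat scrollX; // which slice of the graph to draw
@end
@implementation GraphView
@end

@interface GraphScrollController : NSObject <UIScrollViewDelegate>
@property (nonatomic, strong) NSMutableSet *reusePool; // unused GraphViews
@end

@implementation GraphScrollController

- (GraphView *)dequeueGraphView {
    GraphView *view = [self.reusePool anyObject];
    if (view) {
        [self.reusePool removeObject:view];
    } else {
        view = [[GraphView alloc] initWithFrame:CGRectZero];
    }
    return view;
}

- (void)recycleGraphView:(GraphView *)view {
    [view removeFromSuperview];
    [self.reusePool addObject:view];
}

- (void)scrollViewDidScroll:(UIScrollView *)scrollView {
    // Recycle tiles that scrolled off screen, dequeue views for the newly
    // visible range, set each one's scrollX so it draws only its slice,
    // and add it back as a subview (details omitted).
}

@end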
I think this is a limitation of the iPhone graphics hardware. Through experimentation, I have seen that the iPhone will refuse to draw a frame that is bigger than 2000 pixels in either height or width. It probably has something to do with limited size for frame buffers in hardware.
Watch the 2011 WWDC session video entitled "Session 104 - Advanced Scroll View Techniques".
Thanks, that's helpful. One question -- what did you use for the contentSize of the UIScrollView? Does UIScrollView tolerate large content sizes (over 2000 px) as long as you're not creating buffers to fill the entire space in your content view? Or are you keeping the UIScrollView a constant size (say, 2 screen widths) and resetting the UIScrollView contentOffset property each time you draw (using scrollX instead of the contentOffset to store your position)?
I think I answered my own question (the latter seems like a better alternative), heh, but I'll go ahead and post this in case other people need clarification.