I'm building a news feed using UITableView. For each row I create a rectangular cell with a shadow using the QuartzCore layer shadow properties.
Sample code inside my custom UITableViewCell subclass:
- (void)drawRect:(CGRect)rect
{
    UIView *bgView = [[UIView alloc] initWithFrame:self.bounds];
    bgView.backgroundColor = [UIColor whiteColor];
    bgView.layer.shadowColor = [UIColor blackColor].CGColor;
    bgView.layer.shadowOffset = CGSizeMake(0, 1);
    bgView.layer.shadowOpacity = 0.2;
    bgView.layer.shadowRadius = 1;
    self.backgroundView = bgView;
    [bgView release];
}
When I test the app and scroll the UITableView, scrolling performance is bad. If I remove the shadow, performance is good.
I need your advice. What kind of optimizations can I make to get the best performance?
Your problem isn't Objective-C but the shadow. Since iOS 3.2 you can assign a CGPathRef to the layer's shadowPath property; build one that traces just the outline of your view to reduce rendering time and improve performance. You can also have the shadow rasterized to avoid redrawing it on every frame by setting the shouldRasterize property of your layer to YES. Depending on what you want to do with the layer, rasterization may not be the best-looking option, and it's also a memory/performance trade-off, so keep that in mind.
The easiest way to create the needed shadow path is usually the UIBezierPath class, which has a lot of useful factory methods for building CGPathRef objects of various shapes. Depending on the shape of your view, though, you might have to fall back to building your own path by hand.
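For a plain rectangular cell like yours, a minimal sketch might look like this (done in the cell's layout pass so the bounds are final, rather than in drawRect:):

```objectivec
// In your UITableViewCell subclass -- not in drawRect:
- (void)layoutSubviews
{
    [super layoutSubviews];

    CALayer *layer = self.backgroundView.layer;
    layer.shadowColor = [UIColor blackColor].CGColor;
    layer.shadowOffset = CGSizeMake(0, 1);
    layer.shadowOpacity = 0.2;
    layer.shadowRadius = 1;

    // The shadow path tells Core Animation the exact outline up front,
    // so it can skip the expensive offscreen shape analysis.
    layer.shadowPath = [UIBezierPath bezierPathWithRect:self.backgroundView.bounds].CGPath;

    // Optional: cache the rendered shadow as a bitmap.
    layer.shouldRasterize = YES;
    layer.rasterizationScale = [UIScreen mainScreen].scale;
}
```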
How do you implement your UITableView data source? Have you created the UITableViewCell using cell reuse, i.e. via
-[UITableView dequeueReusableCellWithIdentifier:]?
Please check that.
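A minimal sketch of the reuse pattern (the identifier name is just an example):

```objectivec
- (UITableViewCell *)tableView:(UITableView *)tableView
         cellForRowAtIndexPath:(NSIndexPath *)indexPath
{
    static NSString *cellIdentifier = @"NewsFeedCell"; // hypothetical identifier

    UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:cellIdentifier];
    if (cell == nil) {
        // Only allocate when no reusable cell is available.
        cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault
                                       reuseIdentifier:cellIdentifier] autorelease];
    }

    cell.textLabel.text = [NSString stringWithFormat:@"Row %ld", (long)indexPath.row];
    return cell;
}
```

Without reuse, every scroll allocates a new cell (and a new shadow layer), which makes the shadow cost far worse.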
Here's my drawRect code:
- (void)drawRect:(CGRect)rect {
    if (currentLayer) {
        [currentLayer removeFromSuperlayer];
    }
    if (currentPath) {
        currentLayer = [[CAShapeLayer alloc] init];
        currentLayer.frame = self.bounds;
        currentLayer.path = currentPath.CGPath;
        if ([SettingsManager shared].isColorInverted) {
            currentLayer.strokeColor = [UIColor blackColor].CGColor;
        } else {
            currentLayer.strokeColor = [UIColor whiteColor].CGColor;
        }
        currentLayer.strokeColor = _strokeColorX.CGColor;
        currentLayer.fillColor = [UIColor clearColor].CGColor;
        currentLayer.lineWidth = self.test;
        currentLayer.contentsGravity = kCAGravityCenter;
        [self.layer addSublayer:currentLayer];
        //currentLayer.position = CGPointMake(CGRectGetMidX(self.frame), CGRectGetMidY(self.frame));
    }
    //[currentPath stroke];
}
"The portion of the view’s bounds that needs to be updated"
That's from the Apple dev docs.
I am dealing with a UIView and a device called a slate; the slate can record pencil drawings into my iOS app. I managed to get it working on the entire view, but I don't want the entire view to accept the slate input. Instead, I'd like the drawable UIView to have a height of the phone's screen height minus 40px. Makes sense?
I found out that if I set the frame of currentLayer to new bounds, the area of the screen that can be drawn on is resized accordingly.
I think you're mixing up the meaning of the various rectangles in UIView. The drawRect: parameter designates a partial section of the view that needs to be redrawn. This can be because you called setNeedsDisplay(rect), or because iOS thinks it needs to be redrawn (e.g. because your app's drawing was skipped while the screen was locked, and now the user has unlocked the screen and current drawing is needed).
This rectangle has nothing to do with the size at which your contents are drawn. The size of the area your view is drawn in is controlled by its frame and bounds, the former of which is usually controlled using Auto Layout (i.e. layout constraint objects).
In general, while in drawRect, you look at the bounds of your view to get the full rectangle to draw in. The parameter to drawRect, on the other hand, is there to allow you to optimize your drawing to only redraw the parts that actually changed.
Also, it is in general a bad idea to manipulate the view and layer hierarchies from inside drawRect:. The UIView drawing mechanism expects you to only draw your current view's state there. Since it recursively walks the list of views and subviews to call drawRect:, changing the list of views or layers can cause weird side effects, like views being skipped until the next redraw.
Create and add new layers in your state-changing methods or event handling methods, and position them using auto layout or from inside layoutSubviews.
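Applied to the code above, a sketch of that separation might look like this (assuming currentLayer is a property; the method name updateStrokeLayerWithPath: is illustrative):

```objectivec
// Create/update the shape layer outside of drawRect:,
// e.g. from whatever event handler changes the path.
- (void)updateStrokeLayerWithPath:(UIBezierPath *)path // hypothetical method
{
    [self.currentLayer removeFromSuperlayer];

    CAShapeLayer *layer = [[CAShapeLayer alloc] init];
    layer.path = path.CGPath;
    layer.strokeColor = [UIColor whiteColor].CGColor;
    layer.fillColor = [UIColor clearColor].CGColor;
    layer.lineWidth = 2.0;
    [self.layer addSublayer:layer];
    self.currentLayer = layer;
}

// Keep geometry updates in layoutSubviews, where frame changes are expected.
- (void)layoutSubviews
{
    [super layoutSubviews];
    self.currentLayer.frame = self.bounds;
}
```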
How can I make the distance between the image and the text smaller?
In iOS 6 the same code displays fine, but in the new version the distance has increased.
I do everything with standard methods:
[cell.textLabel setText: .....
// cell.indentationLevel = 0; // this is for testing
UIImage *img = [UIImage imageNamed:@"image10.png"];
NSData *imageData = UIImagePNGRepresentation(img);
cell.imageView.image = [UIImage imageWithData:imageData];
Attached image:
https://www.dropbox.com/s/lt119xgozvzzstd/image01.png
If I remember correctly, the indentation level only affects the text. So when you're changing the indentation level/width you are in fact changing the distance between the imageView and the textLabel.
I tested this and unfortunately you can't make the indentationWidth negative. So you will need to subclass UITableViewCell and set it up with your own views. I'd suggest doing that in the storyboard with Auto Layout, making the indentation an NSLayoutConstraint so that your cell has a property (indentationConstant, for example) that you can change. I have done this for a slightly different purpose: I needed to indent the whole contentView. There's no way around subclassing and creating custom views in the storyboard, sorry. (I'm not even going to suggest the terrible hack of accessing the subviews and modifying their frames; everything is Auto Layout today.)
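A minimal sketch of that subclass, assuming the constraint between the image view and the label is connected as an outlet in the storyboard (all names here are illustrative):

```objectivec
// CustomCell.h
@interface CustomCell : UITableViewCell
// Connected in the storyboard to the label's leading constraint.
@property (nonatomic, strong) IBOutlet NSLayoutConstraint *indentationConstraint;
@end

// Then, in cellForRowAtIndexPath:, you can tighten the gap at runtime:
// CustomCell *cell = [tableView dequeueReusableCellWithIdentifier:@"CustomCell"];
// cell.indentationConstraint.constant = 4.0; // smaller image/text gap
```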
I have searched Apple's documentation and other posts on Stack Overflow, but I'm still having trouble adding a shadow to the inside of a UITextView. I would like to make it look like a UITextField. Here's the code I've tried.
CALayer *frontLayer = self.frontField.layer;
[frontLayer setBorderColor:CGColorCreate(CGColorSpaceCreateDeviceGray(), nil)];
[frontLayer setBorderWidth:1];
[frontLayer setCornerRadius:5];
[frontLayer setShadowRadius:10.0];
CGSize shadowOffset = {0.0,3.0};
[frontLayer setShadowOffset:shadowOffset];
[frontLayer setShadowOpacity:1];
self.frontField.clipsToBounds = YES;
Where am I going wrong?
Start off simple and try this:
[myTextView.layer setShadowColor:[[UIColor blackColor] CGColor]];
[myTextView.layer setShadowOffset:CGSizeMake(1.0, 1.0)];
[myTextView.layer setShadowOpacity:1.0];
[myTextView.layer setShadowRadius:0.3];
myTextView.layer.masksToBounds = NO; // <-- needed for UITextView!
To optimise performance, also add:
myTextView.layer.shadowPath = [UIBezierPath bezierPathWithRect:myTextView.bounds].CGPath;
Then you can add your other properties back in one by one and see which one is causing the issue for you.
According to 25 iOS Performance Tips & Tricks, adding a shadow by setting shadowOffset is an expensive operation that hurts performance:
Core Animation has to do an offscreen pass to first determine the exact shape of your view before it can render the drop shadow, which is a fairly expensive operation.
You can use instead:
myTextView.layer.shadowPath = [[UIBezierPath bezierPathWithRect:myTextView.bounds] CGPath];
I am trying to add an overlay image to a photo that is taken. Has anyone seen any examples of how to do this? I want to have a transparent PNG image and allow the user to take a picture with that image in it.
Iulius is correct that this is essentially a duplicate question. However, just to rule out one issue-- would you like the user to be able to see the overlay while composing the shot? (i.e. if your app makes different hats appear on people's heads, do you want to show the hat floating in space while they take the photo?). If you want to learn more about that, you'll need to use the cameraOverlayView property of the imagePickerController, which lets you superimpose your own view(s) on the camera. There are questions on this topic already on SO, like this one: How to add a overlay view to the cameraview and save it
Update re: scaling: LilMoke, I assume when you say that the image is offset, you're running into the difference between the camera's aspect ratio (4:3) and the screen of the iPhone (3:4). You can define a constant and use it to set the cameraViewTransform property of your UIImagePickerController. Here's a code snippet, partially borrowed and simplified from the excellent augmented reality tutorial at raywenderlich.com:
#define CAMERA_TRANSFORM 1.24299

// First create an overlay view for your superimposed image
overlay = [[UIView alloc] initWithFrame:[[UIScreen mainScreen] bounds]];
overlay.backgroundColor = [UIColor clearColor];
overlay.opaque = NO;

UIImagePickerController *imagePicker;
imagePicker = [[[UIImagePickerController alloc] init] autorelease];
imagePicker.sourceType = UIImagePickerControllerSourceTypeCamera;
imagePicker.showsCameraControls = YES; // assuming you need these?
imagePicker.toolbarHidden = YES;
imagePicker.navigationBarHidden = YES;
imagePicker.wantsFullScreenLayout = YES;
imagePicker.cameraViewTransform = CGAffineTransformScale(imagePicker.cameraViewTransform,
                                                         CAMERA_TRANSFORM, CAMERA_TRANSFORM); // If I understood your problem, this should help
imagePicker.cameraOverlayView = overlay;
If code along these lines doesn't get you on track, then maybe you can post all the relevant code from your troubled project here. Hopefully it's just a matter of setting the cameraViewTransform as I said above.
In viewWillAppear I want to adjust some of my views using the following transformation (MonoTouch code):
CATransform3D oTransform3D = CATransform3D.Identity;
oTransform3D.m34 = 1.0f / -400;
oTransform3D = oTransform3D.Translate( 110, 0, 0);
oTransform3D = oTransform3D.Rotate( (-70f) * (float)Math.PI / 180f, 0, 1, 0);
However, this causes the view to be rendered far left of the screen.
If I put the very same code in viewDidAppear, it is working.
I have already checked that all views have valid sizes.
Did you ever figure this out? I've run into a similar situation where a 3D transform refuses to behave when applied to an immediate subview of a UIViewController's root view. A workaround for me was to wrap the subview in another container UIView.
Something along these lines:
- (void)viewDidLoad {
    [super viewDidLoad];

    UIView *container = [[UIView alloc] initWithFrame:CGRectMake(0, 0, self.view.frame.size.width, self.view.frame.size.height)];
    container.autoresizingMask = UIViewAutoresizingFlexibleWidth | UIViewAutoresizingFlexibleHeight;
    [self.view addSubview:container];

    self.myCrazyFlippingSpinningView = [[CrazyFlippingSpinningView alloc] init];
    [container addSubview:self.myCrazyFlippingSpinningView];
}
Of course, I'd prefer to not have to do that. It's such a hack. If anyone's encountered this and has a better solution, I'd love to see it.
Thanks!
Not sure I fully understand the question, but I am going to take a swag.
CATransform3D has a vanishing point that transforms refer to. The vanishing point is always the x,y center point of whatever view you place the transformed view onto. When I was struggling through the 3D transform issues and getting unexpected results, it was always solved by simply making sure I knew the size and center point of the container view.
You say your code works in viewDidAppear... does that mean you are adding a transformed view to the base view there? Like: this.View.AddSubview(yourTransformedView); ?
I mean, I don't think your problem is a matter of ViewWillAppear vs. ViewDidAppear; I'd guess that if I saw the code where you apply your transform to the views and then add them to a base view or some other superview, I could quickly tell you what is happening.
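One way to make the vanishing point explicit, sketched in Objective-C (sizes are illustrative): put the m34 perspective term on the container's sublayerTransform, so the perspective is applied relative to the container's center rather than to the transformed view's own layer.

```objectivec
// Container whose center defines the vanishing point for its sublayers.
UIView *container = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 320, 480)];

CATransform3D perspective = CATransform3DIdentity;
perspective.m34 = 1.0f / -400.0f; // same perspective term as in the question
container.layer.sublayerTransform = perspective;

// Rotate the child around the y-axis; perspective comes from the container.
CALayer *child = [CALayer layer];
child.frame = CGRectMake(60, 140, 200, 200);
child.transform = CATransform3DMakeRotation(-70.0f * M_PI / 180.0f, 0, 1, 0);
[container.layer addSublayer:child];
```

With this setup, the container's geometry only needs to be settled once (e.g. by the time viewDidAppear runs), and the child's rotation behaves the same no matter where the container sits on screen.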