I have set up a very simple Single View Application (in Swift) using iOS 8.1. I have added one UIImageView to the main view controller view. I am trying to use CAKeyframeAnimation to animate a sequence of images. I was originally using the UIImageView animationImages property which worked fine but I need to be able to know precisely when the animation finishes, hence the move to CAKeyframeAnimation.
My code is as follows:
class ViewController: UIViewController {

    @IBOutlet weak var imageView: UIImageView!

    var animation: CAKeyframeAnimation!

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view, typically from a nib.
        let animationImages: [AnyObject] = [UIImage(named: "image-1")!, UIImage(named: "image-2")!, UIImage(named: "image-3")!, UIImage(named: "image-4")!]

        animation = CAKeyframeAnimation(keyPath: "contents")
        animation.calculationMode = kCAAnimationDiscrete
        animation.duration = 25
        animation.values = animationImages
        animation.repeatCount = 25
        animation.removedOnCompletion = false
        animation.fillMode = kCAFillModeForwards

        self.imageView.layer.addAnimation(animation, forKey: "contents")
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
        // Dispose of any resources that can be recreated.
    }
}
The problem is that the animation does not display any images and I just receive a blank screen. Is there something I am missing in the above code? How can I get the animation to display?
This line is never going to work:
animation.values = animationImages
Change it to:
animation.values = animationImages.map {$0.CGImage as AnyObject}
The reason is that you are animating the "contents" key of this layer, which corresponds to its contents property, and the contents property must be set to a CGImage, not a UIImage. Your animationImages array, by contrast, contains UIImages, not CGImages.
Thus you need to transform your array of UIImage into an array of CGImage. Moreover, you're trying to pass this array to Objective-C, where an NSArray must contain objects only; and since CGImage is not an object in Objective-C's mind, you need to cast each of them as an AnyObject. And that is what my map call does.
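Unrelated to the blank screen, but since the original motivation was knowing precisely when the animation finishes: one simple option (a sketch, not part of the original answer) is to wrap the addAnimation call in an explicit CATransaction and set a completion block:
CATransaction.begin()
CATransaction.setCompletionBlock({
    // Runs once every animation added inside this transaction has finished.
    println("keyframe animation finished")
})
self.imageView.layer.addAnimation(animation, forKey: "contents")
CATransaction.commit()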
Related
I am trying to custom-annotate images in an image view using CAShapeLayers. This includes adding shapes (like rectangles, ellipses, and lines) over the image view, touching them, moving them around, and pinch-zooming them. I am able to recognise a touch on the layer over the image view, but I have been unsuccessful in moving it around. As I try to move the layer, it flies to a corner.
Code and output as below:
class ViewController: UIViewController {

    @IBOutlet weak var imageView: UIImageView!

    override func viewDidLoad() {
        super.viewDidLoad()
    }

    @IBAction func drawRectangle(_ sender: UIButton) { // to initially place the shape at the centre of the image view.
        drawRectangleWithRect(CGRect(x: imageView.bounds.midX - 100, y: imageView.bounds.midY - 100, width: 200, height: 200))
    }

    func drawRectangleWithRect(_ rect: CGRect) {
        let path = UIBezierPath(rect: rect)
        let shapeLayer = CAShapeLayer()
        shapeLayer.path = path.cgPath
        //shapeLayer.bounds = (shapeLayer.path?.boundingBox)!
        shapeLayer.strokeColor = UIColor.black.cgColor
        shapeLayer.fillColor = nil
        shapeLayer.lineWidth = 1
        imageView.layer.addSublayer(shapeLayer)
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        let touch = touches.first
        let point = touch!.location(in: imageView)
        guard let sublayers = imageView.layer.sublayers else { return }
        for sublayer in sublayers {
            if let sublayer = sublayer as? CAShapeLayer, let path = sublayer.path {
                //if path.contains(point) {
                    sublayer.position = point
                //}
            }
        }
    }
}
Tapping the Rectangle button creates the layer at the centre of the image view, as above. But as soon as I touch and move the layer just a bit, it shoots off to the bottom-right corner and disappears.
I then tweaked it a bit and uncommented the line shapeLayer.bounds = (shapeLayer.path?.boundingBox)! inside drawRectangleWithRect(_:). Now tapping the Rectangle button places the layer at the top left of the view instead of the centre :( , but dragging does make the layer follow my finger (though with a delay). However, the layer also moves when I touch the image view outside the layer. I understand that some condition inside touchesMoved is needed to properly recognise the layer's area, but the commented-out check in the code above did not work properly.
What am I missing to properly place the layer at the centre of the view and move it around without any delay?
Can I actually pinch-zoom the layers effectively?
What is the best recommended way to annotate images like this? Adding layers with touch recognition, or adding shapes as subviews with pan gestures?
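For reference, a minimal sketch of one common approach to the first two points (this is an assumption about the fix, not code from the question): build the path at the origin, give the shape layer explicit bounds and a position at the centre of the target rect, then convert the touch into the layer's coordinate space before hit-testing, and disable implicit animations while dragging so the layer tracks the finger without lag.
func drawRectangleWithRect(_ rect: CGRect) {
    let shapeLayer = CAShapeLayer()
    // Build the path at the origin and place the layer via bounds/position,
    // so moving the layer later only changes its position.
    shapeLayer.path = UIBezierPath(rect: CGRect(origin: .zero, size: rect.size)).cgPath
    shapeLayer.bounds = CGRect(origin: .zero, size: rect.size)
    shapeLayer.position = CGPoint(x: rect.midX, y: rect.midY)
    shapeLayer.strokeColor = UIColor.black.cgColor
    shapeLayer.fillColor = nil
    shapeLayer.lineWidth = 1
    imageView.layer.addSublayer(shapeLayer)
}

override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
    guard let point = touches.first?.location(in: imageView),
          let sublayers = imageView.layer.sublayers else { return }
    for sublayer in sublayers {
        guard let shape = sublayer as? CAShapeLayer, let path = shape.path else { continue }
        // Convert the touch into the layer's own coordinate space before hit-testing.
        let localPoint = shape.convert(point, from: imageView.layer)
        if path.contains(localPoint) {
            // Disable implicit animations so the layer follows the finger without lag.
            CATransaction.begin()
            CATransaction.setDisableActions(true)
            shape.position = point // re-centres the layer under the finger
            CATransaction.commit()
        }
    }
}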
I have a UITableViewCell with constraints so that the cells lay out correctly on iPhone 6 and iPhone 6+.
The cell contents resize correctly. However, I need to draw a gradient sublayer over one of the views inside the cell. To do so, I'm using awakeFromNib to add the sublayer.
awakeFromNib reports the frame size prior to autoresizing, so the gradient layer ends up the wrong size.
If I use layoutSubviews, the size is also wrong the first time it's called, and the cell needs to be scrolled before it has the right size; so that method isn't working either.
What method should I be using to get the right frame size in order to draw correctly?
This code worked for me; it's written in Swift.
import UIKit
import QuartzCore

class DemoTableViewCell: UITableViewCell {

    @IBOutlet weak var gradientView: UIView!

    private let gl = CAGradientLayer()

    override func awakeFromNib() {
        super.awakeFromNib()

        // set gradient
        let colors: [AnyObject] = [
            UIColor.clearColor().CGColor,
            UIColor(white: 0, alpha: 0.8).CGColor
        ]
        gl.colors = colors
        gradientView.layer.insertSublayer(gl, atIndex: 0)
    }

    override func layoutSubviews() {
        super.layoutSubviews()

        // update gradient
        gl.frame = gradientView.bounds
    }
}
During awakeFromNib the layout has not yet been applied, so the frames are not final. On your custom cell you can either do this:
-(void)drawRect:(CGRect)rect
{
    [super drawRect:rect];
    // Enter Custom Code
}
Or you can override the applyLayoutAttributes: method like this:
-(void)applyLayoutAttributes:(UICollectionViewLayoutAttributes *)layoutAttributes
{
    [super applyLayoutAttributes:layoutAttributes];
    // Enter custom Code here
}
applyLayoutAttributes: is called before drawRect: but already has the correct size parameters for the rect. (Note that applyLayoutAttributes: is a UICollectionReusableView/UICollectionViewCell method, so this option only applies when you are working with a collection view rather than a table view.)
@angshuk's answer is perfect if anyone wants to modify the UI inside a UITableViewCell. In my case I was trying to add a layer at runtime to a UIView placed inside a UITableViewCell using a storyboard/Auto Layout. I tried the same thing inside awakeFromNib() and layoutSubviews() as well; nothing worked.
Here is the same code using Swift. By the way I am using Xcode 7.3.1 with Swift 2.2.
import UIKit

class InviteMainCellClass: UITableViewCell {

    @IBOutlet weak var shareCodeLblSuperView: UIView! // Custom view added using IB/Autolayout

    override func awakeFromNib() {
        super.awakeFromNib()
        //shareCodeLblSuperView.addDashedBorderToView(self.shareCodeLblSuperView, viewBorderColor: UIColor.whiteColor(), borderWidth: 1.0) // Did not work
    }

    override func drawRect(rect: CGRect) {
        super.drawRect(rect)
        shareCodeLblSuperView.addDashedBorderToView(self.shareCodeLblSuperView, viewBorderColor: UIColor.whiteColor(), borderWidth: 1.0) // Worked
    }

    override func layoutSubviews() {
        super.layoutSubviews()
        //shareCodeLblSuperView.addDashedBorderToView(self.shareCodeLblSuperView, viewBorderColor: UIColor.whiteColor(), borderWidth: 1.0) // Did not work
    }
}
Hope this helped. Thanks.
This looks like a simple task, but when I try to resize an MKMapView using setFrame: I get glitches. Other UIViews resized with setFrame: work perfectly. I made a custom application with a slider bar and a map view; the slider bar changes the X position of the MKMapView but keeps its width equal to the screen width. This approach works fine for all the other views, but the MKMapView does not resize smoothly. Can anyone give me a clue as to why this is happening and how to solve it?
I had the same problem, which seems to occur only on iOS 6 (Apple Maps) and not on iOS 5 (Google Maps).
My solution was to take a "screenshot" of the map when the user starts dragging the divider handle, replace the map with this screenshot during the drag, and put the map back when the finger is released.
I used the code from How to capture UIView to UIImage without loss of quality on retina display for UIView screenshot and Nikolai Ruhe's answer at How do I release a CGImageRef in iOS for a nice background color.
My UIPanGestureRecognizer action is something like this (the (MKMapView)self.map is a subview of (UIView)self.mapContainer, with autoresizing set up in Interface Builder):
- (IBAction)handleMapPullup:(UIPanGestureRecognizer *)sender
{
    CGPoint translation = [sender translationInView:self.mapContainer];

    // save current map center
    static CLLocationCoordinate2D centerCoordinate;

    switch (sender.state) {
        case UIGestureRecognizerStateBegan: {
            // Save map center coordinate
            centerCoordinate = self.map.centerCoordinate;

            // Take a "screenshot" of the map and set the size adjustments
            UIImage *mapScreenshot = [UIImage imageWithView:self.map];
            self.mapImage = [[UIImageView alloc] initWithImage:mapScreenshot];
            self.mapImage.autoresizingMask = self.map.autoresizingMask;
            self.mapImage.contentMode = UIViewContentModeCenter;
            self.mapImage.clipsToBounds = YES;
            self.mapImage.backgroundColor = [mapScreenshot mergedColor];

            // Replace the map with the screenshot
            [self.map removeFromSuperview];
            [self.mapContainer insertSubview:self.mapImage atIndex:0];
        } break;

        case UIGestureRecognizerStateChanged:
            break;

        default:
            // Resize the map to the new dimensions
            self.map.frame = self.mapImage.frame;

            // Replace the screenshot with the resized map
            [self.mapImage removeFromSuperview];
            [self.mapContainer insertSubview:self.map atIndex:0];

            // Release the screenshot
            self.mapImage = nil;
            break;
    }

    // resize the map container according to the translation value
    CGRect mapFrame = self.mapContainer.frame;
    mapFrame.size.height += translation.y;

    // reset translation to make a relative read on the next event
    [sender setTranslation:CGPointZero inView:self.mapContainer];

    self.mapContainer.frame = mapFrame;
    self.map.centerCoordinate = centerCoordinate; // keep center
}
I have an MKAnnotationView that the user can drag around the map.
It is very difficult for a user to drag the pin. I've tried increasing the frame size and also using a giant custom image. But nothing seems to actually change the hit area for the drag to be larger than default.
Consequently, I have to attempt to tap/drag about ten times before anything happens.
MKAnnotationView *annView = [[[MKAnnotationView alloc] initWithAnnotation:annotation reuseIdentifier:@"bluedot"] autorelease];
UIImage *image = [UIImage imageNamed:@"blue_dot.png"];
annView.image = image;
annView.draggable = YES;
annView.selected = YES;
return annView;
What am I missing here?
EDIT:
It turns out the problem is that an MKAnnotationView needs to be touched (selected) before you can drag it. I was having trouble because there are a lot of pins nearby and my MKAnnotationView is very small.
I didn't realise an MKAnnotationView needed to be touched before you could drag it.
To get around this, I created a timer that selects that MKAnnotationView regularly:
NSTimer *selectAnnotationTimer = [[NSTimer scheduledTimerWithTimeInterval:0.2 target:self selector:@selector(selectCenterAnnotation) userInfo:nil repeats:YES] retain];
and the method it calls:
- (void)selectCenterAnnotation {
    [mapView selectAnnotation:centerAnnotation animated:NO];
}
Instead of using a timer, you could select the annotation on mouse down. This way you don't mess with the annotation selection, and don't have a timer for each annotation running all the time.
I had the same problem when developing a mac application, and after selecting the annotation on mouse down, the drag works great.
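On iOS, the analogous idea would be to select the annotation view as soon as it is touched. A minimal, untested sketch of that (the subclass name is made up, and whether touchesBegan is delivered before the map's own gesture handling kicks in may vary):
import MapKit

// Sketch only: select the annotation view on touch-down so the subsequent
// drag is recognised without a separate tap first.
class TouchSelectAnnotationView: MKAnnotationView {
    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        super.touchesBegan(touches, with: event)
        setSelected(true, animated: false)
    }
}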
Just came here to say this still helped me today, multiple years later.
In my application, the user can place a bounding area down with annotations and (hopefully) move them around freely. This turned out to be a bit of a nightmare with the "select first and then move" behaviour, which just plain made it difficult and infuriating.
To solve this, I default annotation views to selected:
func mapView(_ mapView: MKMapView,
             viewFor annotation: MKAnnotation) -> MKAnnotationView? {
    let view = MKAnnotationView(annotation: annotation, reuseIdentifier: "annotation")
    view.isDraggable = true
    view.setSelected(true, animated: false)
    return view
}
The problem is that there is an annotation manager that resets all annotations back to deselected, so I get around it in this delegate method:
func mapView(_ mapView: MKMapView,
             annotationView view: MKAnnotationView,
             didChange newState: MKAnnotationView.DragState,
             fromOldState oldState: MKAnnotationView.DragState) {
    if newState == .ending {
        mapView.annotations.forEach({
            mapView.view(for: $0)?.setSelected(true, animated: false)
        })
    }
}
It's a stupid hack for a stupid problem. It does work though.
I have an array of UIImageViews. I want to apply a shadow to each of these images. I've used the code below:
- (void)awakeFromNib {
    for (UIImageView *image in imagesJigsawPieces) {
        image.layer.shadowColor = [UIColor blackColor].CGColor;
        image.layer.shadowOffset = CGSizeMake(-1, -1);
        image.layer.shadowOpacity = 1;
        image.layer.shadowRadius = 5.0;
        image.clipsToBounds = NO; //EDIT: I have also included this with no change
    }
}
I have also included #import <QuartzCore/CALayer.h>.
I am not getting any errors but I am also not getting any shadows on my images.
Are you certain this code is being called? Have you placed a breakpoint in the for loop to verify?
-awakeFromNib is called only on objects instantiated from a nib file (for example, a view loaded from a nib and connected via an IBOutlet to an ivar in your code). In that case it is called instead of -initWithFrame: (or the like), an important distinction which I sometimes forget myself!