Testing intersection of rotated CGRects [duplicate] - objective-c

I don't know where to start with this one. Obviously CGRectIntersectsRect will not work in this case, and you'll see why.
I have a subclass of UIView that has a UIImageView inside it that is placed in the exact center of the UIView:
I then rotate the custom UIView to maintain the frame of the inner UIImageView while still being able to perform a CGAffineRotation. The resulting frame looks something like this:
I need to prevent users from making these UIImageViews intersect, but I have no idea how to check for intersection between them: their frames are not expressed in the parent UIView's coordinates, and they are rotated without that being reflected in their frames.
All of my attempts so far have been unsuccessful.
Any ideas?

The following algorithm can be used to check if two (rotated or otherwise transformed) views overlap:
1. Use [view convertPoint:point toView:nil] to convert the 4 boundary points of both views to a common coordinate system (the window coordinates).
2. The converted points form two convex quadrilaterals.
3. Use the SAT (Separating Axis Theorem) to check if the quadrilaterals intersect.
http://www.geometrictools.com/Documentation/MethodOfSeparatingAxes.pdf is another description of the algorithm, including pseudo-code; more can be found by searching for "Separating Axis Theorem".
Update: I have tried to write an Objective-C implementation of the "Separating Axis Theorem", and this is what I got. I have only run a few tests so far, so I hope there are not too many errors.
- (BOOL)convexPolygon:(CGPoint *)poly1 count:(int)count1 intersectsWith:(CGPoint *)poly2 count:(int)count2;
tests if 2 convex polygons intersect. Both polygons are given as a CGPoint array of the vertices.
- (BOOL)view:(UIView *)view1 intersectsWith:(UIView *)view2
tests (as described above) if two arbitrary views intersect.
Implementation:
- (void)projectionOfPolygon:(CGPoint *)poly count:(int)count onto:(CGPoint)perp min:(CGFloat *)minp max:(CGFloat *)maxp
{
CGFloat minproj = MAXFLOAT;
CGFloat maxproj = -MAXFLOAT;
for (int j = 0; j < count; j++) {
CGFloat proj = poly[j].x * perp.x + poly[j].y * perp.y;
if (proj > maxproj)
maxproj = proj;
if (proj < minproj)
minproj = proj;
}
*minp = minproj;
*maxp = maxproj;
}
-(BOOL)convexPolygon:(CGPoint *)poly1 count:(int)count1 intersectsWith:(CGPoint *)poly2 count:(int)count2
{
for (int i = 0; i < count1; i++) {
// Perpendicular vector for one edge of poly1:
CGPoint p1 = poly1[i];
CGPoint p2 = poly1[(i+1) % count1];
CGPoint perp = CGPointMake(- (p2.y - p1.y), p2.x - p1.x);
// Projection intervals of poly1, poly2 onto perpendicular vector:
CGFloat minp1, maxp1, minp2, maxp2;
[self projectionOfPolygon:poly1 count:count1 onto:perp min:&minp1 max:&maxp1];
[self projectionOfPolygon:poly2 count:count2 onto:perp min:&minp2 max:&maxp2];
// If projections do not overlap then we have a "separating axis"
// which means that the polygons do not intersect:
if (maxp1 < minp2 || maxp2 < minp1)
return NO;
}
// And now the other way around with edges from poly2:
for (int i = 0; i < count2; i++) {
CGPoint p1 = poly2[i];
CGPoint p2 = poly2[(i+1) % count2];
CGPoint perp = CGPointMake(- (p2.y - p1.y), p2.x - p1.x);
CGFloat minp1, maxp1, minp2, maxp2;
[self projectionOfPolygon:poly1 count:count1 onto:perp min:&minp1 max:&maxp1];
[self projectionOfPolygon:poly2 count:count2 onto:perp min:&minp2 max:&maxp2];
if (maxp1 < minp2 || maxp2 < minp1)
return NO;
}
// No separating axis found, so the polygons must intersect:
return YES;
}
- (BOOL)view:(UIView *)view1 intersectsWith:(UIView *)view2
{
CGPoint poly1[4];
CGRect bounds1 = view1.bounds;
poly1[0] = [view1 convertPoint:bounds1.origin toView:nil];
poly1[1] = [view1 convertPoint:CGPointMake(bounds1.origin.x + bounds1.size.width, bounds1.origin.y) toView:nil];
poly1[2] = [view1 convertPoint:CGPointMake(bounds1.origin.x + bounds1.size.width, bounds1.origin.y + bounds1.size.height) toView:nil];
poly1[3] = [view1 convertPoint:CGPointMake(bounds1.origin.x, bounds1.origin.y + bounds1.size.height) toView:nil];
CGPoint poly2[4];
CGRect bounds2 = view2.bounds;
poly2[0] = [view2 convertPoint:bounds2.origin toView:nil];
poly2[1] = [view2 convertPoint:CGPointMake(bounds2.origin.x + bounds2.size.width, bounds2.origin.y) toView:nil];
poly2[2] = [view2 convertPoint:CGPointMake(bounds2.origin.x + bounds2.size.width, bounds2.origin.y + bounds2.size.height) toView:nil];
poly2[3] = [view2 convertPoint:CGPointMake(bounds2.origin.x, bounds2.origin.y + bounds2.size.height) toView:nil];
return [self convexPolygon:poly1 count:4 intersectsWith:poly2 count:4];
}
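For example, a minimal call site might look like this (imageView1 and imageView2 are hypothetical names for the two rotated views, not from the question):
if ([self view:imageView1 intersectsWith:imageView2]) {
    // The rotated views overlap, so reject or undo the move.
    NSLog(@"Views intersect");
}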
Swift version. (Added this behaviour to UIView via an extension)
extension UIView {
func projection(of polygon: [CGPoint], perpendicularVector: CGPoint) -> (CGFloat, CGFloat) {
var minproj = CGFloat.greatestFiniteMagnitude
var maxproj = -CGFloat.greatestFiniteMagnitude
for j in 0..<polygon.count {
let proj = polygon[j].x * perpendicularVector.x + polygon[j].y * perpendicularVector.y
if proj > maxproj {
maxproj = proj
}
if proj < minproj {
minproj = proj
}
}
return (minproj, maxproj)
}
func convex(polygon: [CGPoint], intersectsWith polygon2: [CGPoint]) -> Bool {
//
let count1 = polygon.count
for i in 0..<count1 {
let p1 = polygon[i]
let p2 = polygon[(i+1) % count1]
let perpendicularVector = CGPoint(x: -(p2.y - p1.y), y: p2.x - p1.x)
let (minp1, maxp1) = projection(of: polygon, perpendicularVector: perpendicularVector)
let (minp2, maxp2) = projection(of: polygon2, perpendicularVector: perpendicularVector)
if maxp1 < minp2 || maxp2 < minp1 {
return false
}
}
//
let count2 = polygon2.count
for i in 0..<count2 {
let p1 = polygon2[i]
let p2 = polygon2[(i+1) % count2]
let perpendicularVector = CGPoint(x: -(p2.y - p1.y), y: p2.x - p1.x)
let (minp1, maxp1) = projection(of: polygon, perpendicularVector: perpendicularVector)
let (minp2, maxp2) = projection(of: polygon2, perpendicularVector: perpendicularVector)
if maxp1 < minp2 || maxp2 < minp1 {
return false
}
}
//
return true
}
func intersects(with someView: UIView) -> Bool {
//
var points1 = [CGPoint]()
let bounds1 = bounds
let p11 = convert(bounds1.origin, to: nil)
let p21 = convert(CGPoint(x: bounds1.origin.x + bounds1.size.width, y: bounds1.origin.y), to: nil)
let p31 = convert(CGPoint(x: bounds1.origin.x + bounds1.size.width, y: bounds1.origin.y + bounds1.size.height) , to: nil)
let p41 = convert(CGPoint(x: bounds1.origin.x, y: bounds1.origin.y + bounds1.size.height), to: nil)
points1.append(p11)
points1.append(p21)
points1.append(p31)
points1.append(p41)
//
var points2 = [CGPoint]()
let bounds2 = someView.bounds
let p12 = someView.convert(bounds2.origin, to: nil)
let p22 = someView.convert(CGPoint(x: bounds2.origin.x + bounds2.size.width, y: bounds2.origin.y), to: nil)
let p32 = someView.convert(CGPoint(x: bounds2.origin.x + bounds2.size.width, y: bounds2.origin.y + bounds2.size.height) , to: nil)
let p42 = someView.convert(CGPoint(x: bounds2.origin.x, y: bounds2.origin.y + bounds2.size.height), to: nil)
points2.append(p12)
points2.append(p22)
points2.append(p32)
points2.append(p42)
//
return convex(polygon: points1, intersectsWith: points2)
}
}
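With the extension in place, the call site is a one-liner (viewA and viewB are hypothetical names for any two views in the same window):
if viewA.intersects(with: viewB) {
    // The rotated views overlap, so reject or undo the move.
}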

Related

Creating parallax focus effect on UICollectionViewCell

How do you create the parallax focus effect on a collection view cell with a custom view? If I were using an image view the property to set would be adjustsImageWhenAncestorFocused but my collection view cell contains a subclassed UIView with custom content drawn using core graphics.
The answer by @raulriera is nice, but only shifts the cell around in 2D.
Also, the OP asked for an objective-C example.
I was also looking to do this effect for the exact same reason - I had UICollectionView with cells containing images and labels.
I created a UIMotionEffectGroup subclass, since getting near to the Apple TV effect seems to require four different motion effects. The first two are the flat movements as in @raulriera's answer, and the other two are the 3D rotations.
Just the shiny environment layer to go now. Any takers? :-)
Here is my code for the motion effect group:
(The shiftDistance and tiltAngle constants set the magnitude of the effect. The given values look pretty similar to the Apple TV effect.)
#import <UIKit/UIKit.h>
#import "UIAppleTvMotionEffectGroup.h"
#implementation UIAppleTvMotionEffectGroup
- (id)init
{
if ((self = [super init]) != nil)
{
// Size of shift movements
CGFloat const shiftDistance = 10.0f;
// Make horizontal movements shift the centre left and right
UIInterpolatingMotionEffect *xShift = [[UIInterpolatingMotionEffect alloc]
initWithKeyPath:#"center.x"
type:UIInterpolatingMotionEffectTypeTiltAlongHorizontalAxis];
xShift.minimumRelativeValue = [NSNumber numberWithFloat: shiftDistance * -1.0f];
xShift.maximumRelativeValue = [NSNumber numberWithFloat: shiftDistance];
// Make vertical movements shift the centre up and down
UIInterpolatingMotionEffect *yShift = [[UIInterpolatingMotionEffect alloc]
initWithKeyPath:#"center.y"
type:UIInterpolatingMotionEffectTypeTiltAlongVerticalAxis];
yShift.minimumRelativeValue = [NSNumber numberWithFloat: shiftDistance * -1.0f];
yShift.maximumRelativeValue = [NSNumber numberWithFloat: shiftDistance];
// Size of tilt movements
CGFloat const tiltAngle = M_PI_4 * 0.125;
// Now make horizontal movements effect a rotation about the Y axis for side-to-side rotation.
UIInterpolatingMotionEffect *xTilt = [[UIInterpolatingMotionEffect alloc] initWithKeyPath:@"layer.transform" type:UIInterpolatingMotionEffectTypeTiltAlongHorizontalAxis];
// CATransform3D value for minimumRelativeValue
CATransform3D transMinimumTiltAboutY = CATransform3DIdentity;
transMinimumTiltAboutY.m34 = 1.0 / 500;
transMinimumTiltAboutY = CATransform3DRotate(transMinimumTiltAboutY, tiltAngle * -1.0, 0, 1, 0);
// CATransform3D value for maximumRelativeValue
CATransform3D transMaximumTiltAboutY = CATransform3DIdentity;
transMaximumTiltAboutY.m34 = 1.0 / 500;
transMaximumTiltAboutY = CATransform3DRotate(transMaximumTiltAboutY, tiltAngle, 0, 1, 0);
// Set the transform property boundaries for the interpolation
xTilt.minimumRelativeValue = [NSValue valueWithCATransform3D: transMinimumTiltAboutY];
xTilt.maximumRelativeValue = [NSValue valueWithCATransform3D: transMaximumTiltAboutY];
// Now make vertical movements effect a rotation about the X axis for up and down rotation.
UIInterpolatingMotionEffect *yTilt = [[UIInterpolatingMotionEffect alloc] initWithKeyPath:@"layer.transform" type:UIInterpolatingMotionEffectTypeTiltAlongVerticalAxis];
// CATransform3D value for minimumRelativeValue
CATransform3D transMinimumTiltAboutX = CATransform3DIdentity;
transMinimumTiltAboutX.m34 = 1.0 / 500;
transMinimumTiltAboutX = CATransform3DRotate(transMinimumTiltAboutX, tiltAngle * -1.0, 1, 0, 0);
// CATransform3D value for maximumRelativeValue
CATransform3D transMaximumTiltAboutX = CATransform3DIdentity;
transMaximumTiltAboutX.m34 = 1.0 / 500;
transMaximumTiltAboutX = CATransform3DRotate(transMaximumTiltAboutX, tiltAngle, 1, 0, 0);
// Set the transform property boundaries for the interpolation
yTilt.minimumRelativeValue = [NSValue valueWithCATransform3D: transMinimumTiltAboutX];
yTilt.maximumRelativeValue = [NSValue valueWithCATransform3D: transMaximumTiltAboutX];
// Add all of the motion effects to this group
self.motionEffects = @[xShift, yShift, xTilt, yTilt];
[xShift release];
[yShift release];
[xTilt release];
[yTilt release];
}
return self;
}
@end
I used it like this in my custom UICollectionViewCell subclass:
@implementation MyCollectionViewCell
- (void)didUpdateFocusInContext:(UIFocusUpdateContext *)context withAnimationCoordinator:(UIFocusAnimationCoordinator *)coordinator
{
// Create a static instance of the motion effect group (could do this anywhere, really, maybe init would be better - we only need one of them.)
static UIAppleTVMotionEffectGroup *s_atvMotionEffect = nil;
if (s_atvMotionEffect == nil)
{
s_atvMotionEffect = [[UIAppleTVMotionEffectGroup alloc] init];
}
[coordinator addCoordinatedAnimations: ^{
if (self.focused)
{
[self addMotionEffect: s_atvMotionEffect];
}
else
{
[self removeMotionEffect: s_atvMotionEffect];
}
} completion: ^{
}];
}
@end
All you need to do is add a UIMotionEffect to your subviews. Something like this
override func didUpdateFocusInContext(context: UIFocusUpdateContext, withAnimationCoordinator coordinator: UIFocusAnimationCoordinator) {
coordinator.addCoordinatedAnimations({ [unowned self] in
if self.focused {
let verticalMotionEffect = UIInterpolatingMotionEffect(keyPath: "center.y", type: .TiltAlongVerticalAxis)
verticalMotionEffect.minimumRelativeValue = -10
verticalMotionEffect.maximumRelativeValue = 10
let horizontalMotionEffect = UIInterpolatingMotionEffect(keyPath: "center.x", type: .TiltAlongHorizontalAxis)
horizontalMotionEffect.minimumRelativeValue = -10
horizontalMotionEffect.maximumRelativeValue = 10
let motionEffectGroup = UIMotionEffectGroup()
motionEffectGroup.motionEffects = [horizontalMotionEffect, verticalMotionEffect]
yourView.addMotionEffect(motionEffectGroup)
}
else {
// Remove the effect here
}
}, completion: nil)
}
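For the removal branch, one possible approach (a sketch on my part, not from the original answer) is to keep a reference to the group so the exact same instance can be removed later:
// Hypothetical stored property on your cell; not in the original snippet.
var focusMotionEffectGroup: UIMotionEffectGroup?

// In the focused branch, after building motionEffectGroup:
self.focusMotionEffectGroup = motionEffectGroup

// In the unfocused branch:
if let group = self.focusMotionEffectGroup {
    yourView.removeMotionEffect(group)
    self.focusMotionEffectGroup = nil
}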
I've converted Simon Tillson's answer to Swift 3.0 and posted it here to save typing for people in the future. Thanks very much for a great solution.
class UIAppleTVMotionEffectGroup : UIMotionEffectGroup{
// size of shift movements
let shiftDistance : CGFloat = 10.0
let tiltAngle : CGFloat = CGFloat(M_PI_4) * 0.125
required init?(coder aDecoder: NSCoder) {
super.init(coder: aDecoder)
}
override init() {
super.init()
// Make horizontal movements shift the centre left and right
let xShift = UIInterpolatingMotionEffect(keyPath: "center.x", type: UIInterpolatingMotionEffectType.tiltAlongHorizontalAxis)
xShift.minimumRelativeValue = shiftDistance * -1.0
xShift.maximumRelativeValue = shiftDistance
let yShift = UIInterpolatingMotionEffect(keyPath: "center.y", type: UIInterpolatingMotionEffectType.tiltAlongVerticalAxis)
yShift.minimumRelativeValue = shiftDistance * -1.0
yShift.maximumRelativeValue = shiftDistance
let xTilt = UIInterpolatingMotionEffect(keyPath: "layer.transform", type: UIInterpolatingMotionEffectType.tiltAlongHorizontalAxis)
var transMinimumTiltAboutY = CATransform3DIdentity
transMinimumTiltAboutY.m34 = 1.0 / 500.0
transMinimumTiltAboutY = CATransform3DRotate(transMinimumTiltAboutY, tiltAngle * -1.0, 0, 1, 0)
var transMaximumTiltAboutY = CATransform3DIdentity
transMaximumTiltAboutY.m34 = 1.0 / 500.0
transMaximumTiltAboutY = CATransform3DRotate(transMaximumTiltAboutY, tiltAngle , 0, 1, 0)
xTilt.minimumRelativeValue = transMinimumTiltAboutY
xTilt.maximumRelativeValue = transMaximumTiltAboutY
let yTilt = UIInterpolatingMotionEffect(keyPath: "layer.transform", type: UIInterpolatingMotionEffectType.tiltAlongVerticalAxis)
var transMinimumTiltAboutX = CATransform3DIdentity
transMinimumTiltAboutX.m34 = 1.0 / 500.0
transMinimumTiltAboutX = CATransform3DRotate(transMinimumTiltAboutX, tiltAngle * -1.0, 1, 0, 0)
var transMaximumTiltAboutX = CATransform3DIdentity
transMaximumTiltAboutX.m34 = 1.0 / 500.0
transMaximumTiltAboutX = CATransform3DRotate(transMaximumTiltAboutX, tiltAngle , 1, 0, 0)
yTilt.minimumRelativeValue = transMinimumTiltAboutX
yTilt.maximumRelativeValue = transMaximumTiltAboutX
self.motionEffects = [xShift,yShift,xTilt,yTilt]
}
}
I have added a little "pop" to the focus update in the UICollectionViewCell subclass. Note the struct wrapper used to get a static variable inside the method.
override func didUpdateFocus(in context: UIFocusUpdateContext, with coordinator: UIFocusAnimationCoordinator) {
struct wrapper {
static let s_atvMotionEffect = UIAppleTVMotionEffectGroup()
}
coordinator.addCoordinatedAnimations( {
var scale : CGFloat = 0.0
if self.isFocused {
self.addMotionEffect(wrapper.s_atvMotionEffect)
scale = 1.2
} else {
self.removeMotionEffect(wrapper.s_atvMotionEffect)
scale = 1.0
}
let transform = CGAffineTransform(scaleX: scale, y: scale)
self.layer.setAffineTransform(transform)
},completion: nil)
}

Limit the Pan Gesture boundary of a UIImageView inside in a UIView

I'm using the below code
- (void)panGestureDetected:(UIPanGestureRecognizer *)recognizer
{
static CGRect originalFrame;
if (recognizer.state == UIGestureRecognizerStateBegan)
{
originalFrame = recognizer.view.frame;
}
else if (recognizer.state == UIGestureRecognizerStateChanged)
{
CGPoint translate = [recognizer translationInView:recognizer.view.superview];
CGRect newFrame = CGRectMake(fmin(recognizer.view.superview.frame.size.width - originalFrame.size.width, fmax(originalFrame.origin.x + translate.x,0.0)),
fmin(recognizer.view.superview.frame.size.height - originalFrame.size.height, fmax(originalFrame.origin.y + translate.y, 0.0)),
originalFrame.size.width,
originalFrame.size.height);
recognizer.view.frame = newFrame;
} else if (recognizer.state == UIGestureRecognizerStateEnded)
{
[self performSelectorInBackground:@selector(ChangeViewAlphaTo1) withObject:nil];
}
}
The main problem is that you are trying to solve two constraints in one line.
Try to break things down when you have an issue. Massive statements are hard to read and even harder to debug.
CGRect newFrame = CGRectMake(fmin(recognizer.view.superview.frame.size.width - originalFrame.size.width, fmax(originalFrame.origin.x + translate.x,0.0)),
fmin(recognizer.view.superview.frame.size.height - originalFrame.size.height, fmax(originalFrame.origin.y + translate.y, 0.0)),
originalFrame.size.width,
originalFrame.size.height);
is barely readable, and when you revisit it in 6 months' time, you or another coder will not be able to tell what you were trying to do.
Try replacing that monster with…
[self translateView:recognizer.view limitedBySuperViewBounds:translate];
All of a sudden, the method name tells you what you are doing.
-(void)translateView:(UIView *)aview limitedBySuperViewBounds:(CGPoint)translate
{
CGRect frameToTranslate = aview.frame;
CGRect superviewBounds= aview.superview.bounds;
frameToTranslate.origin.x = frameToTranslate.origin.x + translate.x;
frameToTranslate.origin.y = frameToTranslate.origin.y + translate.y;
if (frameToTranslate.origin.x < 0) {
frameToTranslate.origin.x = 0;
}
else if (CGRectGetMaxX(frameToTranslate) > CGRectGetWidth(superviewBounds)){
frameToTranslate.origin.x = CGRectGetWidth(superviewBounds) - CGRectGetWidth(frameToTranslate);
}
if (frameToTranslate.origin.y < 0) {
frameToTranslate.origin.y = 0;
}
else if (CGRectGetMaxY(frameToTranslate) > CGRectGetHeight(superviewBounds)){
frameToTranslate.origin.y = CGRectGetHeight(superviewBounds) - CGRectGetHeight(frameToTranslate);
}
aview.frame = frameToTranslate;
}
This isn't just my opinion, this is Clean Code.
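For anyone working in Swift, here is a minimal sketch of the same clamping idea (the method name and parameters are illustrative, not from the answer):
func translate(_ view: UIView, limitedBySuperviewBounds translation: CGPoint) {
    guard let superview = view.superview else { return }
    var frame = view.frame.offsetBy(dx: translation.x, dy: translation.y)
    // Clamp the origin so the frame stays inside the superview's bounds.
    frame.origin.x = max(0, min(frame.origin.x, superview.bounds.width - frame.width))
    frame.origin.y = max(0, min(frame.origin.y, superview.bounds.height - frame.height))
    view.frame = frame
}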

How can I restrict the scrolling of a UIImageView to the bounds of the UIImage like in iPhoto?

Here is an image in iPhoto:
Here is it zoomed in to the top left corner in iPhoto:
Here is the same image in my app:
Here it is zoomed in to the top left corner in my app:
How can I lose the excess grey space surrounding my image and restrict the scrolling to the bounds of the UIImage like iPhoto?
Thanks
You could use a library that does that. Try MWPhotoBrowser.
So, aside from using a 3rd party library, I solved this problem using:
the question "iOS. How do I restrict UIScrollview scrolling to a limited extent?", adapting the answer into the following methods, which I hope are self-explanatory:
- (CGRect) methodThatGetsImageSizeOnScreen
{
float frameHeight;
float frameWidth;
float frameXOrigin;
float frameYOrigin;
float threshold;
BOOL thisImageTouchesLeftAndRight;
UIInterfaceOrientation thisOrientation = self.interfaceOrientation;
if (UIInterfaceOrientationIsLandscape(thisOrientation)){
threshold = 748.0/1024.0;
if ((self.imageToPresent.size.height == self.imageToPresent.size.width) || ((self.imageToPresent.size.height/self.imageToPresent.size.width) > threshold)){
thisImageTouchesLeftAndRight = NO;
frameWidth = (748/self.imageToPresent.size.height)*self.imageToPresent.size.width;
frameHeight = 748;
frameXOrigin = (1024-frameWidth)/2;
frameYOrigin = 0;
}
else
{
thisImageTouchesLeftAndRight = YES;
frameWidth = 1024;
frameHeight = (1024/self.imageToPresent.size.width)*self.imageToPresent.size.height;
frameXOrigin = 0;
frameYOrigin = (748-frameHeight)/2;
}
}
else {
threshold = 768.0/1004.0;
if ((self.imageToPresent.size.height == self.imageToPresent.size.width) || ((self.imageToPresent.size.width/self.imageToPresent.size.height) > threshold)){
thisImageTouchesLeftAndRight = YES;
frameWidth = 768;
frameHeight = (768/self.imageToPresent.size.width)*self.imageToPresent.size.height;
frameXOrigin = 0;
frameYOrigin = (1004-frameHeight)/2;
}
else
{
thisImageTouchesLeftAndRight = NO;
frameWidth = (1004/self.imageToPresent.size.height)*self.imageToPresent.size.width;
frameHeight = 1004;
frameXOrigin = (768-frameWidth)/2;
frameYOrigin = 0;
}
}
CGRect theRect = CGRectMake(frameXOrigin, frameYOrigin, frameWidth, frameHeight);
return theRect;
}
#pragma mark - UIScrollViewDelegate
- (void) scrollViewDidScroll:(UIScrollView*)scroll{
UIInterfaceOrientation thisOrientation = self.interfaceOrientation;
float largeDimension;
float smallDimension;
if (UIInterfaceOrientationIsLandscape(thisOrientation)){
largeDimension = 1024;
smallDimension = 748;
}
else{
largeDimension = 1004;
smallDimension = 768;
}
CGPoint offset = scroll.contentOffset;
CGRect results = [self methodThatGetsImageSizeOnScreen];
float frameHeight = results.size.height;
float frameYOrigin = results.origin.y;
float frameWidth = results.size.width;
float frameXOrigin = results.origin.x;
//So, we start the limiting of a landscape image in portrait (in the y direction) when we exceed the following criteria:
if((frameHeight*self.scrollView.zoomScale) > largeDimension){
if(offset.y < self.scrollView.zoomScale*frameYOrigin) offset.y = self.scrollView.zoomScale*frameYOrigin;
if(offset.y > ((self.scrollView.zoomScale*frameYOrigin)+(frameHeight*self.scrollView.zoomScale)-largeDimension)) offset.y = ((self.scrollView.zoomScale*frameYOrigin)+(frameHeight*self.scrollView.zoomScale)-largeDimension);
}
if((frameWidth*self.scrollView.zoomScale) > largeDimension){
if(offset.x < self.scrollView.zoomScale*frameXOrigin) offset.x = self.scrollView.zoomScale*frameXOrigin;
if(offset.x > ((self.scrollView.zoomScale*frameXOrigin)+(frameWidth*self.scrollView.zoomScale)-largeDimension)) offset.x = ((self.scrollView.zoomScale*frameXOrigin)+(frameWidth*self.scrollView.zoomScale)-largeDimension);
}
// Set offset to adjusted value
scroll.contentOffset = offset;
//Remember you may want your minimum zoomScale set in viewDidLoad or viewWillAppear
}
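The same clamping idea can be written without the hardcoded 1024/768/748/1004 screen dimensions. This is my own generalization (a sketch, not part of the original answer), phrased against the scroll view's bounds:
// imageFrame: the image's frame in the scroll view's zoomed content coordinates.
func clampedOffset(_ proposed: CGPoint, imageFrame: CGRect, scrollView: UIScrollView) -> CGPoint {
    var offset = proposed
    if imageFrame.width > scrollView.bounds.width {
        offset.x = max(imageFrame.minX, min(offset.x, imageFrame.maxX - scrollView.bounds.width))
    }
    if imageFrame.height > scrollView.bounds.height {
        offset.y = max(imageFrame.minY, min(offset.y, imageFrame.maxY - scrollView.bounds.height))
    }
    return offset
}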

How to get video frame of the AVPlayer?

I have PlayerView class for displaying AVPlayer's playback. Code from documentation.
#import <UIKit/UIKit.h>
#import <AVFoundation/AVFoundation.h>
@interface PlayerView : UIView
@property (nonatomic) AVPlayer *player;
@end
@implementation PlayerView
+ (Class)layerClass {
return [AVPlayerLayer class];
}
- (AVPlayer*)player {
return [(AVPlayerLayer *)[self layer] player];
}
- (void)setPlayer:(AVPlayer *)player {
[(AVPlayerLayer *)[self layer] setPlayer:player];
}
@end
I set up my AVPlayer (containing a video asset with size 320x240) in this PlayerView (with frame.size.width = 100, frame.size.height = 100), and my video is resized. How can I get the size of the video after it has been added to the PlayerView?
iOS 7.0 added a new feature:
AVPlayerLayer now has a videoRect property.
This worked for me, for the case when you don't have the AVPlayerLayer.
- (CGRect)videoRect {
// #see http://stackoverflow.com/a/6565988/1545158
AVAssetTrack *track = [[self.player.currentItem.asset tracksWithMediaType:AVMediaTypeVideo] firstObject];
if (!track) {
return CGRectZero;
}
CGSize trackSize = [track naturalSize];
CGSize videoViewSize = self.videoView.bounds.size;
CGFloat trackRatio = trackSize.width / trackSize.height;
CGFloat videoViewRatio = videoViewSize.width / videoViewSize.height;
CGSize newSize;
if (videoViewRatio > trackRatio) {
newSize = CGSizeMake(trackSize.width * videoViewSize.height / trackSize.height, videoViewSize.height);
} else {
newSize = CGSizeMake(videoViewSize.width, trackSize.height * videoViewSize.width / trackSize.width);
}
CGFloat newX = (videoViewSize.width - newSize.width) / 2;
CGFloat newY = (videoViewSize.height - newSize.height) / 2;
return CGRectMake(newX, newY, newSize.width, newSize.height);
}
Found a solution:
Add to PlayerView class:
- (CGRect)videoContentFrame {
AVPlayerLayer *avLayer = (AVPlayerLayer *)[self layer];
// AVPlayerLayerContentLayer
CALayer *layer = (CALayer *)[[avLayer sublayers] objectAtIndex:0];
CGRect transformedBounds = CGRectApplyAffineTransform(layer.bounds, CATransform3DGetAffineTransform(layer.sublayerTransform));
return transformedBounds;
}
Here's the solution that's working for me; it takes into account the positioning of the AVPlayer within the view. I just added this to the PlayerView custom class. I had to solve this because videoRect doesn't appear to work in 10.7.
- (NSRect) videoRect {
NSRect theVideoRect = NSMakeRect(0,0,0,0);
NSRect theLayerRect = self.playerLayer.frame;
NSSize theNaturalSize = NSSizeFromCGSize([[[self.movie asset] tracksWithMediaType:AVMediaTypeVideo][0] naturalSize]);
float movieAspectRatio = theNaturalSize.width/theNaturalSize.height;
float viewAspectRatio = theLayerRect.size.width/theLayerRect.size.height;
if (viewAspectRatio < movieAspectRatio) {
theVideoRect.size.width = theLayerRect.size.width;
theVideoRect.size.height = theLayerRect.size.width/movieAspectRatio;
theVideoRect.origin.x = 0;
theVideoRect.origin.y = (theLayerRect.size.height/2) - (theVideoRect.size.height/2);
}
else if (viewAspectRatio > movieAspectRatio) {
theVideoRect.size.width = movieAspectRatio * theLayerRect.size.height;
theVideoRect.size.height = theLayerRect.size.height;
theVideoRect.origin.x = (theLayerRect.size.width/2) - (theVideoRect.size.width/2);
theVideoRect.origin.y = 0;
}
return theVideoRect;
}
Here is Eric Badros' answer ported to iOS. I also added preferredTransform handling. This assumes _player is an AVPlayer.
- (CGRect) videoRect {
CGRect theVideoRect = CGRectZero;
// Replace this with whatever frame your AVPlayer is playing inside of:
CGRect theLayerRect = self.playerLayer.frame;
AVAssetTrack *track = [_player.currentItem.asset tracksWithMediaType:AVMediaTypeVideo][0];
CGSize theNaturalSize = [track naturalSize];
theNaturalSize = CGSizeApplyAffineTransform(theNaturalSize, track.preferredTransform);
theNaturalSize.width = fabs(theNaturalSize.width);
theNaturalSize.height = fabs(theNaturalSize.height);
CGFloat movieAspectRatio = theNaturalSize.width / theNaturalSize.height;
CGFloat viewAspectRatio = theLayerRect.size.width / theLayerRect.size.height;
// Note change this *greater than* to a *less than* if your video will play in aspect fit mode (as opposed to aspect fill mode)
if (viewAspectRatio > movieAspectRatio) {
theVideoRect.size.width = theLayerRect.size.width;
theVideoRect.size.height = theLayerRect.size.width / movieAspectRatio;
theVideoRect.origin.x = 0;
theVideoRect.origin.y = (theLayerRect.size.height - theVideoRect.size.height) / 2;
} else if (viewAspectRatio < movieAspectRatio) {
theVideoRect.size.width = movieAspectRatio * theLayerRect.size.height;
theVideoRect.size.height = theLayerRect.size.height;
theVideoRect.origin.x = (theLayerRect.size.width - theVideoRect.size.width) / 2;
theVideoRect.origin.y = 0;
}
return theVideoRect;
}
Here is a Swift solution based on Andrey Banshchikov's answer. His solution is especially useful when you don't have access to AVPlayerLayer.
func currentVideoFrameSize(playerView: AVPlayerView, player: AVPlayer) -> CGSize {
// See https://stackoverflow.com/a/40164496/1877617
let track = player.currentItem?.asset.tracks(withMediaType: .video).first
if let track = track {
let trackSize = track.naturalSize
let videoViewSize = playerView.bounds.size
let trackRatio = trackSize.width / trackSize.height
let videoViewRatio = videoViewSize.width / videoViewSize.height
var newSize: CGSize
if videoViewRatio > trackRatio {
newSize = CGSize(width: trackSize.width * videoViewSize.height / trackSize.height, height: videoViewSize.height)
} else {
newSize = CGSize(width: videoViewSize.width, height: trackSize.height * videoViewSize.width / trackSize.width)
}
return newSize
}
return CGSize.zero
}
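Usage is then a single call (playerView and player being whatever instances you already hold):
let videoSize = currentVideoFrameSize(playerView: playerView, player: player)
print("Video is rendered at \(videoSize)")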
Andrey's answer worked for file/local/downloaded video, but didn't work for streamed HLS video. As a second attempt, I was able to find the video track right inside the "tracks" property of currentItem. Also rewritten in Swift.
private func videoRect() -> CGRect {
// Based on https://stackoverflow.com/a/40164496 - originally objective-c
var trackTop: AVAssetTrack? = nil
if let track1 = self.player?.currentItem?.asset.tracks(withMediaType: AVMediaType.video).first {
trackTop = track1
}
else {
// For some reason the above way wouldn't find the "track" for streamed HLS video.
// This seems to work for streamed HLS.
if let tracks = self.player?.currentItem?.tracks {
for avplayeritemtrack in tracks {
if let assettrack = avplayeritemtrack.assetTrack {
if assettrack.mediaType == .video {
// Found an assetTrack here?
trackTop = assettrack
break
}
}
}
}
}
guard let track = trackTop else {
print("Failed getting track")
return CGRect.zero
}
let trackSize = track.naturalSize
let videoViewSize = self.view.bounds.size
let trackRatio = trackSize.width / trackSize.height
let videoViewRatio = videoViewSize.width / videoViewSize.height
let newSize: CGSize
if videoViewRatio > trackRatio {
newSize = CGSize.init(width: trackSize.width * videoViewSize.height / trackSize.height, height: videoViewSize.height)
}
else {
newSize = CGSize.init(width: videoViewSize.width, height: trackSize.height * videoViewSize.width / trackSize.width)
}
let newX: CGFloat = (videoViewSize.width - newSize.width) / 2
let newY: CGFloat = (videoViewSize.height - newSize.height) / 2
return CGRect.init(x: newX, y: newY, width: newSize.width, height: newSize.height)
}
Much simpler approach
I found another way. It seems to be much easier if you're using AVPlayerViewController. Why didn't I find this earlier?
return self.videoBounds
2022 SwiftUI
struct PlayerViewController: UIViewControllerRepresentable {
private let avController = AVPlayerViewController()
var videoURL: URL?
private var player: AVPlayer {
return AVPlayer(url: videoURL!)
}
// πŸ‘‡πŸΌπŸ‘‡πŸΌπŸ‘‡πŸΌ HERE πŸ‘‡πŸΌπŸ‘‡πŸΌπŸ‘‡πŸΌ
func getVideoFrame() -> CGRect {
self.avController.videoBounds
}
// πŸ‘†πŸΌπŸ‘†πŸΌπŸ‘†πŸΌ HERE πŸ‘†πŸΌπŸ‘†πŸΌπŸ‘†πŸΌ
func makeUIViewController(context: Context) -> AVPlayerViewController {
avController.modalPresentationStyle = .fullScreen
avController.player = player
avController.player?.play()
return avController
}
func updateUIViewController(_ playerController: AVPlayerViewController, context: Context) {}
}
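Embedding it in a SwiftUI view could then look like this (the URL is a placeholder):
struct ContentView: View {
    var body: some View {
        PlayerViewController(videoURL: URL(string: "https://example.com/video.m3u8"))
            .ignoresSafeArea()
    }
}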

schedule update issue

This might sound pretty straightforward. I've created a method and I've called it as below in the init method.
[self createNewSpr:ccp(s.width * 0.25,s.height-200)];
[self createNewSpr:ccp(s.width * 0.50,s.height-200)];
[self createNewSpr:ccp(s.width * 0.75,s.height-200)];
[self scheduleUpdate];
I've defined a for loop in my update method that imposes a gravity higher than the world's on the sprites. Only the sprite from the last call is affected by the new gravity; the first and second still act under the world gravity. I am not sure what is wrong, but I suspect it is the scheduleUpdate. Please help.
Edit: Update Method :
-(void) update: (ccTime) dt
{
int32 velocityIterations = 8;
int32 positionIterations = 1;
world->Step(dt, velocityIterations, positionIterations);
for (b2Body* b = world->GetBodyList(); b; b = b->GetNext())
{
if (b == sprite)
{
b->ApplyForce( b2Vec2(0.0,20*b->GetMass()),b->GetWorldCenter());
}
}
}
The createNewSpr: method:
-(void) createNewSpr:(CGPoint)pos {
//CGSize s = [CCDirector sharedDirector].winSize;
b2Vec2 startPos = [self toMeters:pos];
CGFloat linkHeight = 0.24;
CGFloat linkWidth = 0.1;
b2BodyDef bodyDef;
bodyDef.type = b2_dynamicBody;
bodyDef.position = startPos;
b2FixtureDef fixtureDef;
fixtureDef.density = 0.1;
b2PolygonShape polygonShape;
polygonShape.SetAsBox(linkWidth,linkHeight);
fixtureDef.shape = &polygonShape;
//first
b2Body* link = world->CreateBody( &bodyDef );
link->CreateFixture( &fixtureDef );
PhysicsSprite* segmentSprite = [PhysicsSprite spriteWithFile:@"sg.png"];
[self addChild:segmentSprite];
[segmentSprite setPhysicsBody:link];
b2RevoluteJointDef revoluteJointDef;
revoluteJointDef.localAnchorA.Set( 0, linkHeight);
revoluteJointDef.localAnchorB.Set( 0, -linkHeight);
for (int i = 0; i < 10; i++) {
b2Body* newLink = world->CreateBody( &bodyDef );
newLink->CreateFixture( &fixtureDef );
PhysicsSprite* segmentSprite = [PhysicsSprite spriteWithFile:@"sg.png"];
[self addChild:segmentSprite];
[segmentSprite setPhysicsBody:link];
revoluteJointDef.bodyA = link;
revoluteJointDef.bodyB = newLink;
world->CreateJoint( &revoluteJointDef );
link = newLink;//next iteration
}
PhysicsSprite* circleBodySprite = [PhysicsSprite spriteWithFile:@"cb.png"];
[self addChild:circleBodySprite z:1];
b2CircleShape circleShape;
circleShape.m_radius = circleBodySprite.contentSize.width/2 / PTM_RATIO;
fixtureDef.shape = &circleShape;
b2Body* chainBase =world->CreateBody( &bodyDef );
chainBase->CreateFixture( &fixtureDef );
[circleBodySprite setPhysicsBody:chainBase];
sprite = chainBase;
revoluteJointDef.bodyA = link;
revoluteJointDef.bodyB = chainBase;
revoluteJointDef.localAnchorA.Set(0,linkWidth);
revoluteJointDef.localAnchorB.Set(0,linkWidth);
world->CreateJoint( &revoluteJointDef );
}
The problem is with the createNewSpr method.
You assign sprite to a single body, so when you call this method three times, sprite ends up referring to the third chain base only.
When you compare against it in the update method, the extra gravity is therefore applied to the third object only. To affect all three, keep every chain base in a collection and check membership against that instead, as sketched below.
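A minimal sketch of that fix (assuming a std::vector<b2Body*> ivar named chainBases, which is not in the original code):
// In createNewSpr, instead of overwriting the single `sprite` pointer:
chainBases.push_back(chainBase);

// In update, apply the extra force to every chain base:
for (b2Body* b : chainBases) {
    b->ApplyForce(b2Vec2(0.0f, 20 * b->GetMass()), b->GetWorldCenter());
}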
Hope this helps.. :)