I'm learning Swift. As a test, I'm translating some of my old Objective-C programs to Swift, but I have run into a strange error. In Objective-C I have the following code:
- (CGSize)makeSizeFromCentimetersWidth: (CGFloat)width andY: (CGFloat)height {
NSScreen *screen = [NSScreen mainScreen];
NSDictionary *description = [screen deviceDescription];
NSSize displayPixelSize = [[description objectForKey:NSDeviceSize] sizeValue];
CGSize displayPhysicalSize = CGDisplayScreenSize([[description objectForKey:@"NSScreenNumber"] unsignedIntValue]);
CGFloat resolution = (displayPixelSize.width / displayPhysicalSize.width) * 25.4f;
CGFloat pixelsWidth = 0.394 * width * resolution;
CGFloat pixelsHeight = 0.394 * height * resolution;
return CGSizeMake(pixelsWidth, pixelsHeight);
}
In Swift I have translated it to this:
func makeSizeFromCentimeters(width: CGFloat, height: CGFloat) -> CGSize {
var screen: NSScreen = NSScreen.mainScreen()!
var description: NSDictionary = screen.deviceDescription
var displayPixelSize: NSSize = description.objectForKey(NSDeviceSize)!.sizeValue
var displayPhysicalSize: CGSize = CGDisplayScreenSize(description.objectForKey("NSScreenNumber")!.unsignedIntValue)
var resolution = (displayPixelSize.width / displayPhysicalSize.width) * 25.4
var pixelsWidth: CGFloat = 0.394 * width * resolution
var pixelsHeight: CGFloat = 0.394 * height * resolution
return CGSizeMake(pixelsWidth, pixelsHeight)
}
In Objective-C the code does what it should: calculate a size in pixels from centimeters, so that (in my case) an NSImageView gets exactly the size of the given centimeters. But in Swift, the returned size is always 0:
NSLog("%f", makeSizeFromCentimeters(2, height: 2).width)
NSLog("%f", makeSizeFromCentimeters(2, height: 2).height)
Is there a translation error? Which variable is 0? (I have no idea why it should be 0 if it's not caused by a variable.)
Thank you for your help!
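For comparison, here is a minimal sketch of the same conversion in more recent Swift (macOS/AppKit). It is only an illustration, not the answer to the question; the guard-based unwrapping and naming are mine, not taken from the code above.
import AppKit

// A sketch of the centimeters-to-pixels conversion in modern Swift.
// Assumes the main screen and its device description are available.
func makeSizeFromCentimeters(width: CGFloat, height: CGFloat) -> CGSize {
    guard let screen = NSScreen.main,
          let pixelValue = screen.deviceDescription[.size] as? NSValue,
          let screenNumber = screen.deviceDescription[NSDeviceDescriptionKey("NSScreenNumber")] as? NSNumber
    else { return .zero }

    let displayPixelSize = pixelValue.sizeValue
    // Physical size of the display in millimetres.
    let displayPhysicalSize = CGDisplayScreenSize(screenNumber.uint32Value)
    // Pixels per millimetre times 25.4 gives pixels per inch.
    let resolution = (displayPixelSize.width / displayPhysicalSize.width) * 25.4
    // 1 cm is roughly 0.394 in, so centimeters * 0.394 * DPI gives pixels.
    return CGSize(width: 0.394 * width * resolution,
                  height: 0.394 * height * resolution)
}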
How do you create the parallax focus effect on a collection view cell with a custom view? If I were using an image view, the property to set would be adjustsImageWhenAncestorFocused, but my collection view cell contains a subclassed UIView with custom content drawn using Core Graphics.
The answer by @raulriera is nice, but only shifts the cell around in 2D.
Also, the OP asked for an Objective-C example.
I was also looking to do this effect, for the exact same reason: I had a UICollectionView with cells containing images and labels.
I created a UIMotionEffectGroup subclass, since getting near to the Apple TV effect seems to require four different motion effects. The first two are the flat movements, as in @raulriera's answer, and the other two are the 3D rotations.
Just the shiny environment layer to go now. Any takers? :-)
Here is my code for the motion effect group:
(The shiftDistance and tiltAngle constants set the magnitude of the effect. The given values look pretty similar to the Apple TV effect.)
#import <UIKit/UIKit.h>
#import "UIAppleTvMotionEffectGroup.h"
#implementation UIAppleTvMotionEffectGroup
- (id)init
{
if ((self = [super init]) != nil)
{
// Size of shift movements
CGFloat const shiftDistance = 10.0f;
// Make horizontal movements shift the centre left and right
UIInterpolatingMotionEffect *xShift = [[UIInterpolatingMotionEffect alloc]
initWithKeyPath:#"center.x"
type:UIInterpolatingMotionEffectTypeTiltAlongHorizontalAxis];
xShift.minimumRelativeValue = [NSNumber numberWithFloat: shiftDistance * -1.0f];
xShift.maximumRelativeValue = [NSNumber numberWithFloat: shiftDistance];
// Make vertical movements shift the centre up and down
UIInterpolatingMotionEffect *yShift = [[UIInterpolatingMotionEffect alloc]
initWithKeyPath:#"center.y"
type:UIInterpolatingMotionEffectTypeTiltAlongVerticalAxis];
yShift.minimumRelativeValue = [NSNumber numberWithFloat: shiftDistance * -1.0f];
yShift.maximumRelativeValue = [NSNumber numberWithFloat: shiftDistance];
// Size of tilt movements
CGFloat const tiltAngle = M_PI_4 * 0.125;
// Now make horizontal movements effect a rotation about the Y axis for side-to-side rotation.
UIInterpolatingMotionEffect *xTilt = [[UIInterpolatingMotionEffect alloc] initWithKeyPath:@"layer.transform" type:UIInterpolatingMotionEffectTypeTiltAlongHorizontalAxis];
// CATransform3D value for minimumRelativeValue
CATransform3D transMinimumTiltAboutY = CATransform3DIdentity;
transMinimumTiltAboutY.m34 = 1.0 / 500;
transMinimumTiltAboutY = CATransform3DRotate(transMinimumTiltAboutY, tiltAngle * -1.0, 0, 1, 0);
// CATransform3D value for maximumRelativeValue
CATransform3D transMaximumTiltAboutY = CATransform3DIdentity;
transMaximumTiltAboutY.m34 = 1.0 / 500;
transMaximumTiltAboutY = CATransform3DRotate(transMaximumTiltAboutY, tiltAngle, 0, 1, 0);
// Set the transform property boundaries for the interpolation
xTilt.minimumRelativeValue = [NSValue valueWithCATransform3D: transMinimumTiltAboutY];
xTilt.maximumRelativeValue = [NSValue valueWithCATransform3D: transMaximumTiltAboutY];
// Now make vertical movements effect a rotation about the X axis for up and down rotation.
UIInterpolatingMotionEffect *yTilt = [[UIInterpolatingMotionEffect alloc] initWithKeyPath:@"layer.transform" type:UIInterpolatingMotionEffectTypeTiltAlongVerticalAxis];
// CATransform3D value for minimumRelativeValue
CATransform3D transMinimumTiltAboutX = CATransform3DIdentity;
transMinimumTiltAboutX.m34 = 1.0 / 500;
transMinimumTiltAboutX = CATransform3DRotate(transMinimumTiltAboutX, tiltAngle * -1.0, 1, 0, 0);
// CATransform3D value for maximumRelativeValue
CATransform3D transMaximumTiltAboutX = CATransform3DIdentity;
transMaximumTiltAboutX.m34 = 1.0 / 500;
transMaximumTiltAboutX = CATransform3DRotate(transMaximumTiltAboutX, tiltAngle, 1, 0, 0);
// Set the transform property boundaries for the interpolation
yTilt.minimumRelativeValue = [NSValue valueWithCATransform3D: transMinimumTiltAboutX];
yTilt.maximumRelativeValue = [NSValue valueWithCATransform3D: transMaximumTiltAboutX];
// Add all of the motion effects to this group
self.motionEffects = @[xShift, yShift, xTilt, yTilt];
[xShift release];
[yShift release];
[xTilt release];
[yTilt release];
}
return self;
}
@end
I used it like this in my custom UICollectionViewCell subclass:
@implementation MyCollectionViewCell
- (void)didUpdateFocusInContext:(UIFocusUpdateContext *)context withAnimationCoordinator:(UIFocusAnimationCoordinator *)coordinator
{
// Create a static instance of the motion effect group (could do this anywhere, really, maybe init would be better - we only need one of them.)
static UIAppleTVMotionEffectGroup *s_atvMotionEffect = nil;
if (s_atvMotionEffect == nil)
{
s_atvMotionEffect = [[UIAppleTVMotionEffectGroup alloc] init];
}
[coordinator addCoordinatedAnimations: ^{
if (self.focused)
{
[self addMotionEffect: s_atvMotionEffect];
}
else
{
[self removeMotionEffect: s_atvMotionEffect];
}
} completion: ^{
}];
}
@end
All you need to do is add a UIMotionEffect to your subviews. Something like this:
override func didUpdateFocusInContext(context: UIFocusUpdateContext, withAnimationCoordinator coordinator: UIFocusAnimationCoordinator) {
coordinator.addCoordinatedAnimations({ [unowned self] in
if self.focused {
let verticalMotionEffect = UIInterpolatingMotionEffect(keyPath: "center.y", type: .TiltAlongVerticalAxis)
verticalMotionEffect.minimumRelativeValue = -10
verticalMotionEffect.maximumRelativeValue = 10
let horizontalMotionEffect = UIInterpolatingMotionEffect(keyPath: "center.x", type: .TiltAlongHorizontalAxis)
horizontalMotionEffect.minimumRelativeValue = -10
horizontalMotionEffect.maximumRelativeValue = 10
let motionEffectGroup = UIMotionEffectGroup()
motionEffectGroup.motionEffects = [horizontalMotionEffect, verticalMotionEffect]
yourView.addMotionEffect(motionEffectGroup)
}
else {
// Remove the effect here
}
}, completion: nil)
}
I've converted Simon Tillson's answer to Swift 3.0 and posted it here to save typing for people in the future. Thanks very much for a great solution.
class UIAppleTVMotionEffectGroup : UIMotionEffectGroup{
// size of shift movements
let shiftDistance : CGFloat = 10.0
let tiltAngle : CGFloat = CGFloat(M_PI_4) * 0.125
required init?(coder aDecoder: NSCoder) {
super.init(coder: aDecoder)
}
override init() {
super.init()
// Make horizontal movements shift the centre left and right
let xShift = UIInterpolatingMotionEffect(keyPath: "center.x", type: UIInterpolatingMotionEffectType.tiltAlongHorizontalAxis)
xShift.minimumRelativeValue = shiftDistance * -1.0
xShift.maximumRelativeValue = shiftDistance
let yShift = UIInterpolatingMotionEffect(keyPath: "center.y", type: UIInterpolatingMotionEffectType.tiltAlongVerticalAxis)
yShift.minimumRelativeValue = shiftDistance * -1.0
yShift.maximumRelativeValue = shiftDistance
let xTilt = UIInterpolatingMotionEffect(keyPath: "layer.transform", type: UIInterpolatingMotionEffectType.tiltAlongHorizontalAxis)
var transMinimumTiltAboutY = CATransform3DIdentity
transMinimumTiltAboutY.m34 = 1.0 / 500.0
transMinimumTiltAboutY = CATransform3DRotate(transMinimumTiltAboutY, tiltAngle * -1.0, 0, 1, 0)
var transMaximumTiltAboutY = CATransform3DIdentity
transMaximumTiltAboutY.m34 = 1.0 / 500.0
transMaximumTiltAboutY = CATransform3DRotate(transMaximumTiltAboutY, tiltAngle , 0, 1, 0)
xTilt.minimumRelativeValue = transMinimumTiltAboutY
xTilt.maximumRelativeValue = transMaximumTiltAboutY
let yTilt = UIInterpolatingMotionEffect(keyPath: "layer.transform", type: UIInterpolatingMotionEffectType.tiltAlongVerticalAxis)
var transMinimumTiltAboutX = CATransform3DIdentity
transMinimumTiltAboutX.m34 = 1.0 / 500.0
transMinimumTiltAboutX = CATransform3DRotate(transMinimumTiltAboutX, tiltAngle * -1.0, 1, 0, 0)
var transMaximumTiltAboutX = CATransform3DIdentity
transMaximumTiltAboutX.m34 = 1.0 / 500.0
transMaximumTiltAboutX = CATransform3DRotate(transMaximumTiltAboutX, tiltAngle , 1, 0, 0)
yTilt.minimumRelativeValue = transMinimumTiltAboutX
yTilt.maximumRelativeValue = transMaximumTiltAboutX
self.motionEffects = [xShift,yShift,xTilt,yTilt]
}
}
I have added a little pop to the part in the UICollectionViewCell subclass. Note the struct wrapper for the static variable:
override func didUpdateFocus(in context: UIFocusUpdateContext, with coordinator: UIFocusAnimationCoordinator) {
struct wrapper {
static let s_atvMotionEffect = UIAppleTVMotionEffectGroup()
}
coordinator.addCoordinatedAnimations( {
var scale : CGFloat = 0.0
if self.isFocused {
self.addMotionEffect(wrapper.s_atvMotionEffect)
scale = 1.2
} else {
self.removeMotionEffect(wrapper.s_atvMotionEffect)
scale = 1.0
}
let transform = CGAffineTransform(scaleX: scale, y: scale)
self.layer.setAffineTransform(transform)
},completion: nil)
}
So here is the code:
static inline CGFloat randomInRange(CGFloat low, CGFloat high) {
CGFloat value = arc4random_uniform(UINT32_MAX) / (CGFloat)UINT32_MAX;
return value * (high - low) + low;
}
static const CGFloat HALO_LOW_ANGLE = 200.0 * M_PI / 180;
static const CGFloat HALO_HIGH_ANGLE = 340.0 * M_PI / 180;
static const CGFloat HALO_SPEED = 100.0;
-(void) spawnHalo {
SKSpriteNode *halo = [SKSpriteNode spriteNodeWithImageNamed:@"Halo"];
halo.position = CGPointMake(randomInRange(halo.size.width / 2, self.size.width - (halo.size.width / 2)), self.size.height + (halo.size.height / 2));
halo.physicsBody = [SKPhysicsBody bodyWithCircleOfRadius:16];
CGVector direction = radiansToVector(randomInRange(HALO_LOW_ANGLE, HALO_HIGH_ANGLE));
halo.physicsBody.velocity = CGVectorMake(direction.dx * HALO_SPEED, direction.dy * HALO_SPEED);
halo.physicsBody.restitution = 1.0;
halo.physicsBody.linearDamping = 0.0;
halo.physicsBody.friction = 0.0;
[self.mainLayer addChild:halo];
}
I get that first we wanted to get a random value between 0 and 1, but what I really don't understand is how the coordinates are calculated.
What if I want to spawn the sprite from anywhere, say the right, left, or bottom of the scene? How do I actually calculate that?
This should do the trick:
CGPoint randomPosition = CGPointMake(arc4random() % (int)CGRectGetWidth(yourSceneInstance.frame),
arc4random() % (int)CGRectGetHeight(yourSceneInstance.frame));
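The same idea reads like this in Swift, for reference (just a sketch; scene stands for whatever SKScene instance you are spawning into):
import SpriteKit

// A uniformly random point inside the scene's frame, Swift version of the snippet above.
func randomPosition(in scene: SKScene) -> CGPoint {
    return CGPoint(
        x: CGFloat(arc4random_uniform(UInt32(scene.frame.width))),
        y: CGFloat(arc4random_uniform(UInt32(scene.frame.height))))
}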
I am using drawRect to display text from an NSString. I am trying to use sizeWithFont to auto-resize (shrink) the font, starting from a default font size of 17 and using a loop to reduce the font size by 1 until it fits the available width. Can anyone help me implement this? An example would be nice; right now I just have the font size set to 17.0:
[[self.string displayName] drawAtPoint:CGPointMake(xcoord, ycoord) withFont:[UIFont boldSystemFontOfSize:17.0]];
CGSize size = [[self.patient displayName] sizeWithFont:[UIFont boldSystemFontOfSize:17.0]];
max_current_y = size.height > max_current_y ? size.height : max_current_y;
xcoord = xcoord + 3.0f + size.width;
OK, never mind. Here's a modified version of the same method that takes the NSString for which to return a font:
-(UIFont*)getFontForString:(NSString*)string
toFitInRect:(CGRect)rect
seedFont:(UIFont*)seedFont{
UIFont* returnFont = seedFont;
CGSize stringSize = [string sizeWithAttributes:@{NSFontAttributeName : seedFont}];
while(stringSize.width > rect.size.width){
returnFont = [UIFont systemFontOfSize:returnFont.pointSize -1];
stringSize = [string sizeWithAttributes:@{NSFontAttributeName : returnFont}];
}
return returnFont;
}
Here's how to call it:
NSString* stringToDraw = #"Test 123";
CGRect rect = CGRectMake(100., 100., 100., 200.);
UIFont* font = [self getFontForString:stringToDraw toFitInRect:rect seedFont:[UIFont systemFontOfSize:20]];
[stringToDraw drawInRect:rect withFont:font];
This code is for iOS 7+.
Trying font sizes in steps of 1.0 may be very slow. You can tremendously improve the algorithm by taking two measurements at two different sizes, then using linear interpolation to guess a size that will be very close to the right one.
If the guess turns out not to be close enough, repeat the calculation using the guessed size in place of one of the previous two, until it is good enough or stops changing:
// any values will do, prefer those near expected min and max
CGFloat size1 = 12.0, size2 = 56.0;
CGFloat width1 = measure_for_size(size1);
CGFloat width2 = measure_for_size(size2);
while (1) {
CGFloat guessed_size = size1 + (required_width - width1) * (size2 - size1) / (width2 - width1);
width2 = measure_for_size(guessed_size);
if ( fabs(guessed_size - size2) > some_epsilon && !is_close_enough(width2, required_width) ) {
size2 = guessed_size;
continue;
}
// round down to integer and clamp guessed_size as appropriate for your design
return floor(clamp(guessed_size, 6.0, 24.0));
}
The is_close_enough() implementation is completely up to you. Given that text width grows almost linearly with font size, you can simply drop it and just do 2-4 iterations, which should be enough.
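For reference, here is a rough Swift sketch of the same interpolation idea (the starting sizes, the fixed three iterations, and the 6-24 clamping range are arbitrary choices of mine, not part of the answer above):
import UIKit

// Measure the string at two sizes, then linearly interpolate toward the size
// whose rendered width matches the target. Width grows almost linearly with
// point size, so a few refinements are usually enough.
func estimatedFontSize(for text: String, fittingWidth targetWidth: CGFloat) -> CGFloat {
    func width(at pointSize: CGFloat) -> CGFloat {
        let font = UIFont.systemFont(ofSize: pointSize)
        return (text as NSString).size(withAttributes: [.font: font]).width
    }

    var size1: CGFloat = 12, size2: CGFloat = 56
    var width1 = width(at: size1), width2 = width(at: size2)
    for _ in 0..<3 {
        guard width2 != width1 else { break }
        let guess = size1 + (targetWidth - width1) * (size2 - size1) / (width2 - width1)
        (size1, width1) = (size2, width2)
        (size2, width2) = (guess, width(at: guess))
    }
    // Round down and clamp, as in the pseudocode above.
    return floor(min(max(size2, 6.0), 24.0))
}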
I wanted to try to make a version that didn't have to repeatedly check font sizes using a do...while loop. Instead, I assumed that font point size scales linearly with rendered width, worked out the ratio between the required width and the actual frame width, and scaled the font size accordingly. I ended up with this function:
+ (CGFloat)fontSizeToFitString:(NSString *)string inWidth:(float)width withFont:(UIFont *)font
{
UILabel *label = [UILabel new];
label.font = font;
label.text = string;
[label sizeToFit];
float ratio = width / label.frame.size.width;
return font.pointSize * ratio;
}
Pass in a font of any size, as well as the string and the required width, and it will return you the point size for that font.
I also wanted to take it a bit further and find out the font size for a multi-line string, so that the longest line would fit without a line break:
+ (CGFloat)fontSizeToFitLongestLineOfString:(NSString *)string inWidth:(float)width withFont:(UIFont *)font
{
NSArray *stringLines = [string componentsSeparatedByString:@"\n"];
UILabel *label = [UILabel new];
label.font = font;
float maxWidth = 0;
for(NSString *line in stringLines)
{
label.text = line;
[label sizeToFit];
maxWidth = MAX(maxWidth, label.frame.size.width);
}
float ratio = width / maxWidth;
return font.pointSize * ratio;
}
Seems to work perfectly fine for me. Hope it helps someone else.
The original poster didn't specify what platform they were working on, but for OS X developers on Mavericks, sizeWithFont: doesn't exist and one should use sizeWithAttributes: instead:
NSSize newSize = [aString sizeWithAttributes:
[NSDictionary dictionaryWithObjectsAndKeys:
[NSFont fontWithName:@"Arial Rounded MT Bold" size:53.0],NSFontAttributeName,nil
]];
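In Swift on macOS the equivalent measurement looks roughly like this (a sketch; the font name and size are just the ones from the snippet above):
import AppKit

// Measure a string with an explicit font attribute instead of sizeWithFont:.
let font = NSFont(name: "Arial Rounded MT Bold", size: 53.0) ?? NSFont.systemFont(ofSize: 53.0)
let newSize = ("some text" as NSString).size(withAttributes: [.font: font])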
Here's a method which can return a font that will fit in a rect:
-(UIFont*)getFontToFitInRect:(CGRect)rect seedFont:(UIFont*)seedFont{
UIFont* returnFont = seedFont;
CGSize stringSize = [self sizeWithFont:returnFont];
while(stringSize.width > rect.size.width){
returnFont = [UIFont systemFontOfSize:returnFont.pointSize -1];
stringSize = [self sizeWithFont:returnFont];
}
return returnFont;
}
You can add this method to an NSString category. You can find out more about how to add a category here: http://developer.apple.com/library/ios/#documentation/Cocoa/Conceptual/ProgrammingWithObjectiveC/CustomizingExistingClasses/CustomizingExistingClasses.html#//apple_ref/doc/uid/TP40011210-CH6-SW2
If you don't want to create a category, you can add this method to one of your utility classes and pass in the string for which you want the font to be returned.
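If you prefer Swift, the same idea can live in a String extension; a sketch (mirroring the Objective-C method above, not the poster's code):
import UIKit

extension String {
    // Shrink from seedFont until the rendered width fits the rect's width.
    func fontToFit(in rect: CGRect, seedFont: UIFont) -> UIFont {
        var font = seedFont
        var size = (self as NSString).size(withAttributes: [.font: font])
        while size.width > rect.size.width && font.pointSize > 1 {
            font = UIFont.systemFont(ofSize: font.pointSize - 1)
            size = (self as NSString).size(withAttributes: [.font: font])
        }
        return font
    }
}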
Here is another method, inspired by @puru020's and @jowie's answers. Hope it helps someone:
-(UIFont *) adjustedFontSizeForString:(NSString *)string forWidth:(float)originalWidth forFont:(UIFont *)font
{
CGSize stringSize = [string sizeWithFont:font];
if(stringSize.width <= originalWidth)
{
return font;
}
float ratio = originalWidth / stringSize.width;
float fontSize = font.pointSize * ratio;
return [font fontWithSize:fontSize];
}
I modified @puru020's solution a bit, added support for attributes, and improved it slightly:
Note: the method should be wrapped in an NSString category.
- (UIFont*)requiredFontToFitInSize:(CGSize)size seedFont:(UIFont*)seedFont attributes:(NSDictionary*)attributes{
UIFont *returnFont = [UIFont systemFontOfSize:seedFont.pointSize +1];
NSMutableDictionary *mutableAttributes = attributes.mutableCopy;
CGSize stringSize;
do {
returnFont = [UIFont systemFontOfSize:returnFont.pointSize -1];
[mutableAttributes setObject:returnFont forKey:NSFontAttributeName];
stringSize = [self sizeWithAttributes:mutableAttributes];
} while (stringSize.width > size.width);
return returnFont;
}
I have a PlayerView class for displaying an AVPlayer's playback. The code is from the documentation.
#import <UIKit/UIKit.h>
#import <AVFoundation/AVFoundation.h>
@interface PlayerView : UIView
@property (nonatomic) AVPlayer *player;
@end
@implementation PlayerView
+ (Class)layerClass {
return [AVPlayerLayer class];
}
- (AVPlayer*)player {
return [(AVPlayerLayer *)[self layer] player];
}
- (void)setPlayer:(AVPlayer *)player {
[(AVPlayerLayer *)[self layer] setPlayer:player];
}
@end
I set up my AVPlayer (containing a video asset with size 320x240) in this PlayerView (with frame.size.width = 100, frame.size.height = 100), and my video is resized. How can I get the size of the video after it has been added to the PlayerView?
iOS 7.0 added a new feature:
AVPlayerLayer has a videoRect property.
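So if your view's backing layer is an AVPlayerLayer (as in the PlayerView class above), you can read the displayed video rectangle directly. A minimal sketch:
import AVFoundation
import UIKit

// Assumes playerView is a view whose layer is an AVPlayerLayer (iOS 7+).
func displayedVideoRect(of playerView: UIView) -> CGRect {
    guard let playerLayer = playerView.layer as? AVPlayerLayer else { return .zero }
    // videoRect is the frame of the current video image within the layer's bounds.
    return playerLayer.videoRect
}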
This worked for me when you don't have the AVPlayerLayer:
- (CGRect)videoRect {
// @see http://stackoverflow.com/a/6565988/1545158
AVAssetTrack *track = [[self.player.currentItem.asset tracksWithMediaType:AVMediaTypeVideo] firstObject];
if (!track) {
return CGRectZero;
}
CGSize trackSize = [track naturalSize];
CGSize videoViewSize = self.videoView.bounds.size;
CGFloat trackRatio = trackSize.width / trackSize.height;
CGFloat videoViewRatio = videoViewSize.width / videoViewSize.height;
CGSize newSize;
if (videoViewRatio > trackRatio) {
newSize = CGSizeMake(trackSize.width * videoViewSize.height / trackSize.height, videoViewSize.height);
} else {
newSize = CGSizeMake(videoViewSize.width, trackSize.height * videoViewSize.width / trackSize.width);
}
CGFloat newX = (videoViewSize.width - newSize.width) / 2;
CGFloat newY = (videoViewSize.height - newSize.height) / 2;
return CGRectMake(newX, newY, newSize.width, newSize.height);
}
Found a solution:
Add this to the PlayerView class:
- (CGRect)videoContentFrame {
AVPlayerLayer *avLayer = (AVPlayerLayer *)[self layer];
// AVPlayerLayerContentLayer
CALayer *layer = (CALayer *)[[avLayer sublayers] objectAtIndex:0];
CGRect transformedBounds = CGRectApplyAffineTransform(layer.bounds, CATransform3DGetAffineTransform(layer.sublayerTransform));
return transformedBounds;
}
Here's the solution that's working for me; it takes into account the positioning of the AVPlayer within the view. I just added this to the PlayerView custom class. I had to solve this because videoRect doesn't appear to work in 10.7.
- (NSRect) videoRect {
NSRect theVideoRect = NSMakeRect(0,0,0,0);
NSRect theLayerRect = self.playerLayer.frame;
NSSize theNaturalSize = NSSizeFromCGSize([[[self.movie asset] tracksWithMediaType:AVMediaTypeVideo][0] naturalSize]);
float movieAspectRatio = theNaturalSize.width/theNaturalSize.height;
float viewAspectRatio = theLayerRect.size.width/theLayerRect.size.height;
if (viewAspectRatio < movieAspectRatio) {
theVideoRect.size.width = theLayerRect.size.width;
theVideoRect.size.height = theLayerRect.size.width/movieAspectRatio;
theVideoRect.origin.x = 0;
theVideoRect.origin.y = (theLayerRect.size.height/2) - (theVideoRect.size.height/2);
}
else if (viewAspectRatio > movieAspectRatio) {
theVideoRect.size.width = movieAspectRatio * theLayerRect.size.height;
theVideoRect.size.height = theLayerRect.size.height;
theVideoRect.origin.x = (theLayerRect.size.width/2) - (theVideoRect.size.width/2);
theVideoRect.origin.y = 0;
}
return theVideoRect;
}
Here is Eric Badros' answer ported to iOS. I also added preferredTransform handling. This assumes _player is an AVPlayer
- (CGRect) videoRect {
CGRect theVideoRect = CGRectZero;
// Replace this with whatever frame your AVPlayer is playing inside of:
CGRect theLayerRect = self.playerLayer.frame;
AVAssetTrack *track = [_player.currentItem.asset tracksWithMediaType:AVMediaTypeVideo][0];
CGSize theNaturalSize = [track naturalSize];
theNaturalSize = CGSizeApplyAffineTransform(theNaturalSize, track.preferredTransform);
theNaturalSize.width = fabs(theNaturalSize.width);
theNaturalSize.height = fabs(theNaturalSize.height);
CGFloat movieAspectRatio = theNaturalSize.width / theNaturalSize.height;
CGFloat viewAspectRatio = theLayerRect.size.width / theLayerRect.size.height;
// Note change this *greater than* to a *less than* if your video will play in aspect fit mode (as opposed to aspect fill mode)
if (viewAspectRatio > movieAspectRatio) {
theVideoRect.size.width = theLayerRect.size.width;
theVideoRect.size.height = theLayerRect.size.width / movieAspectRatio;
theVideoRect.origin.x = 0;
theVideoRect.origin.y = (theLayerRect.size.height - theVideoRect.size.height) / 2;
} else if (viewAspectRatio < movieAspectRatio) {
theVideoRect.size.width = movieAspectRatio * theLayerRect.size.height;
theVideoRect.size.height = theLayerRect.size.height;
theVideoRect.origin.x = (theLayerRect.size.width - theVideoRect.size.width) / 2;
theVideoRect.origin.y = 0;
}
return theVideoRect;
}
Here is a Swift solution based on Andrey Banshchikov's answer. His solution is especially useful when you don't have access to AVPlayerLayer.
func currentVideoFrameSize(playerView: AVPlayerView, player: AVPlayer) -> CGSize {
// See https://stackoverflow.com/a/40164496/1877617
let track = player.currentItem?.asset.tracks(withMediaType: .video).first
if let track = track {
let trackSize = track.naturalSize
let videoViewSize = playerView.bounds.size
let trackRatio = trackSize.width / trackSize.height
let videoViewRatio = videoViewSize.width / videoViewSize.height
var newSize: CGSize
if videoViewRatio > trackRatio {
newSize = CGSize(width: trackSize.width * videoViewSize.height / trackSize.height, height: videoViewSize.height)
} else {
newSize = CGSize(width: videoViewSize.width, height: trackSize.height * videoViewSize.width / trackSize.width)
}
return newSize
}
return CGSize.zero
}
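A usage sketch (macOS, since AVPlayerView comes from AVKit there; the URL is a placeholder, and the result is only meaningful once the item's tracks are loaded and the view is laid out):
import AVKit

let player = AVPlayer(url: URL(fileURLWithPath: "/path/to/video.mp4"))
let playerView = AVPlayerView()
playerView.player = player
playerView.frame = CGRect(x: 0, y: 0, width: 400, height: 300)
let fittedVideoSize = currentVideoFrameSize(playerView: playerView, player: player)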
Andrey's answer worked for file/local/downloaded video, but didn't work for streamed HLS video. As a second attempt, I was able to find the video track right inside the "tracks" property of currentItem. This is also rewritten in Swift.
private func videoRect() -> CGRect {
// Based on https://stackoverflow.com/a/40164496 - originally objective-c
var trackTop: AVAssetTrack? = nil
if let track1 = self.player?.currentItem?.asset.tracks(withMediaType: AVMediaType.video).first {
trackTop = track1
}
else {
// For some reason the above way wouldn't find the "track" for streamed HLS video.
// This seems to work for streamed HLS.
if let tracks = self.player?.currentItem?.tracks {
for avplayeritemtrack in tracks {
if let assettrack = avplayeritemtrack.assetTrack {
if assettrack.mediaType == .video {
// Found an assetTrack here?
trackTop = assettrack
break
}
}
}
}
}
guard let track = trackTop else {
print("Failed getting track")
return CGRect.zero
}
let trackSize = track.naturalSize
let videoViewSize = self.view.bounds.size
let trackRatio = trackSize.width / trackSize.height
let videoViewRatio = videoViewSize.width / videoViewSize.height
let newSize: CGSize
if videoViewRatio > trackRatio {
newSize = CGSize.init(width: trackSize.width * videoViewSize.height / trackSize.height, height: videoViewSize.height)
}
else {
newSize = CGSize.init(width: videoViewSize.width, height: trackSize.height * videoViewSize.width / trackSize.width)
}
let newX: CGFloat = (videoViewSize.width - newSize.width) / 2
let newY: CGFloat = (videoViewSize.height - newSize.height) / 2
return CGRect.init(x: newX, y: newY, width: newSize.width, height: newSize.height)
}
Much simpler approach
I found another way. It seems to be much easier if you're using AVPlayerViewController. Why didn't I find this earlier?
return self.videoBounds
2022 SwiftUI
struct PlayerViewController: UIViewControllerRepresentable {
private let avController = AVPlayerViewController()
var videoURL: URL?
private var player: AVPlayer {
return AVPlayer(url: videoURL!)
}
// >>> HERE <<<
func getVideoFrame() -> CGRect {
self.avController.videoBounds
}
// >>> HERE <<<
func makeUIViewController(context: Context) -> AVPlayerViewController {
avController.modalPresentationStyle = .fullScreen
avController.player = player
avController.player?.play()
return avController
}
func updateUIViewController(_ playerController: AVPlayerViewController, context: Context) {}
}
I'm currently using this technique to get the color of a pixel in a UIImage (on iOS).
- (UIColor*) getPixelColorAtLocation:(CGPoint)point {
UIColor* color = nil;
CGImageRef inImage = self.image.CGImage;
// Create off screen bitmap context to draw the image into. Format ARGB is 4 bytes for each pixel: Alpha, Red, Green, Blue
CGContextRef cgctx = [self createARGBBitmapContextFromImage:inImage];
if (cgctx == NULL) { return nil; /* error */ }
size_t w = CGImageGetWidth(inImage);
size_t h = CGImageGetHeight(inImage);
CGRect rect = {{0,0},{w,h}};
// Draw the image to the bitmap context. Once we draw, the memory
// allocated for the context for rendering will then contain the
// raw image data in the specified color space.
CGContextDrawImage(cgctx, rect, inImage);
// Now we can get a pointer to the image data associated with the bitmap
// context.
unsigned char* data = CGBitmapContextGetData (cgctx);
if (data != NULL) {
//offset locates the pixel in the data from x,y.
//4 for 4 bytes of data per pixel, w is width of one row of data.
int offset = 4*((w*round(point.y))+round(point.x));
int alpha = data[offset];
int red = data[offset+1];
int green = data[offset+2];
int blue = data[offset+3];
NSLog(#"offset: %i colors: RGB A %i %i %i %i",offset,red,green,blue,alpha);
color = [UIColor colorWithRed:(red/255.0f) green:(green/255.0f) blue:(blue/255.0f) alpha:(alpha/255.0f)];
}
// When finished, release the context
CGContextRelease(cgctx);
// Free image data memory for the context
if (data) { free(data); }
return color;
}
As illustrated here:
http://www.markj.net/iphone-uiimage-pixel-color/
It works quite well, but it fails when working with images larger than the UIImageView. I tried adding an image and changing the scaling mode to fit the view. How would I modify the code so that it would still be able to sample the pixel color from a scaled image?
Try this for Swift 3:
func getPixelColor(image: UIImage, x: Int, y: Int, width: CGFloat) -> UIColor
{
    guard let cgImage = image.cgImage,
          let pixelData = cgImage.dataProvider?.data else { return .clear }
    let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
    let pixelInfo: Int = ((Int(width) * y) + x) * 4
    let r = CGFloat(data[pixelInfo]) / 255.0
    let g = CGFloat(data[pixelInfo+1]) / 255.0
    let b = CGFloat(data[pixelInfo+2]) / 255.0
    let a = CGFloat(data[pixelInfo+3]) / 255.0
    return UIColor(red: r, green: g, blue: b, alpha: a)
}
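Usage is something like the following (a hypothetical example; note that the function above assumes a tightly packed RGBA bitmap, so width should be the image's width in pixels):
// Sample the pixel at (10, 20) of an image from the asset catalog.
let image = UIImage(named: "example")!
let pixelWidth = image.size.width * image.scale
let color = getPixelColor(image: image, x: 10, y: 20, width: pixelWidth)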
Here's a pointer:
0x3A28213A //sorry, I couldn't resist the joke
For real now: after going through the comments on the page at markj.net, a certain James suggested making the following changes:
size_t w = CGImageGetWidth(inImage); //Written by Mark
size_t h = CGImageGetHeight(inImage); //Written by Mark
float xscale = w / self.frame.size.width;
float yscale = h / self.frame.size.height;
point.x = point.x * xscale;
point.y = point.y * yscale;
(thanks to http://www.markj.net/iphone-uiimage-pixel-color/comment-page-1/#comment-2159)
This didn't actually work for me... Not that I did much testing, and I'm not the world's greatest programmer (yet)...
My solution was to scale the UIImageView in such a way that each pixel of the image in it was the same size as a standard CGPoint on the screen, then I took my color as normal (using getPixelColorAtLocation:(CGPoint)point), then I scaled the image back to the size I wanted.
Hope this helps!
Use the UIImageView's layer:
- (UIColor*) getPixelColorAtLocation:(CGPoint)point {
UIColor* color = nil;
UIGraphicsBeginImageContext(self.frame.size);
CGContextRef cgctx = UIGraphicsGetCurrentContext();
if (cgctx == NULL) { return nil; /* error */ }
[self.layer renderInContext:cgctx];
unsigned char* data = CGBitmapContextGetData (cgctx);
/*
...
*/
UIGraphicsEndImageContext();
return color;
}