How to use a custom video resolution with AVFoundation and AVCaptureVideoDataOutput on Mac

I need to process each frame of the captured video. Although AVCaptureDevice.formats offers many different frame dimensions, it seems AVCaptureSession only supports the frame sizes defined in its presets.
I've also tried setting AVCaptureDevice.activeFormat both before and after adding the AVCaptureDeviceInput, but no matter what I set, if the session preset is AVCaptureSessionPresetHigh it always gives me 1280x720 frames. Similarly, if I set AVCaptureSessionPreset640x480, I can only get 640x480 frames.
So how can I set a custom video frame size like 800x600?
With Media Foundation under Windows or V4L2 under Linux, it's easy to set an arbitrary frame size when capturing.
It doesn't seem possible to do this on the Mac.

AFAIK there isn't a way to do this. All the code I've seen to do video capture uses the presets.
The documentation for the videoSettings property of AVCaptureVideoDataOutput says:
The only key currently supported is the kCVPixelBufferPixelFormatTypeKey key.
so the other answers that pass width and height in the video settings won't work; those parameters will simply be ignored.

Set the kCVPixelBufferWidthKey and kCVPixelBufferHeightKey options on the AVCaptureVideoDataOutput object. A minimal sample is below (add your own error checking).
_sessionOutput = [[AVCaptureVideoDataOutput alloc] init];
NSDictionary *pixelBufferOptions = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithDouble:width], (id)kCVPixelBufferWidthKey,
    [NSNumber numberWithDouble:height], (id)kCVPixelBufferHeightKey,
    [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA], (id)kCVPixelBufferPixelFormatTypeKey,
    nil];
[_sessionOutput setVideoSettings:pixelBufferOptions];
Note: this width/height will override the session preset's width/height (if they differ).
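To confirm what is actually being delivered, a minimal sketch of the sample buffer delegate (assuming your object was registered via setSampleBufferDelegate:queue:) can log the dimensions of each frame:
// Sketch only: verify the dimensions of the frames the output delivers.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    size_t width  = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    NSLog(@"Got frame: %zu x %zu", width, height);
}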

Use the videoSettings property of AVCaptureVideoDataOutput to describe the pixel format, width, and height in a dictionary.

CLBeacon - how to change rssi, major and minor?

My question is basically: how do I modify an iBeacon's default settings like major, minor, and RSSI?
There are different ways to set these values depending on what you mean by an iBeacon:
Hardware iBeacons
Each beacon vendor has different ways of setting these values. Some are changed via a Bluetooth service that is typically managed with a proprietary iOS or Android app. (Examples include Radius Networks' battery-powered and USB beacons and TwoCanoes beacons.) Radius Networks' PiBeacon includes an SD card with an editable file containing the identifiers. Other vendors like Estimote create beacons with fixed UUIDs that cannot be changed. Because there is no standard mechanism, there is no universal tool for setting identifiers on all beacon types.
iOS Software iBeacons
You set these values with code like below:
CLBeaconRegion *region = [[CLBeaconRegion alloc] initWithProximityUUID:[[NSUUID alloc] initWithUUIDString:@"2F234454-CF6D-4A0F-ADF2-F4911BA9FFA6"] major:1 minor:1 identifier:@"com.radiusnetworks.iBeaconExample"];
NSDictionary *peripheralData = [region peripheralDataWithMeasuredPower:-55];
[_peripheralManager startAdvertising:peripheralData];
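Note that this assumes _peripheralManager (a CBPeripheralManager) has already been created and has reached the powered-on state; a minimal sketch of that setup, with illustrative placement, might be:
// Sketch: create the peripheral manager somewhere early (e.g. viewDidLoad)
// and only start advertising once Bluetooth reports it is powered on.
_peripheralManager = [[CBPeripheralManager alloc] initWithDelegate:self queue:nil];

- (void)peripheralManagerDidUpdateState:(CBPeripheralManager *)peripheral
{
    if (peripheral.state == CBPeripheralManagerStatePoweredOn) {
        // peripheralData is assumed to be kept in an ivar/property.
        [_peripheralManager startAdvertising:peripheralData];
    }
}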
The iOS CLBeacon class
The CLBeacon class is not designed to be created or modified by the user; it is supposed to be constructed by Core Location when it detects iBeacons. That said, you can force values into its read-only properties using key-value coding, like so:
CLBeacon * iBeacon = [[CLBeacon alloc] init];
[iBeacon setValue:[NSNumber numberWithInt:1] forKey:#"major"];
[iBeacon setValue:[NSNumber numberWithInt:1] forKey:#"minor"];
[iBeacon setValue:[NSNumber numberWithInt:-55] forKey:#"rssi"];
[iBeacon setValue:[[NSUUID alloc] initWithUUIDString:#"2F234454-CF6D-4A0F-ADF2-F4911BA9FFA6"] forKey:#"proximityUUID"];
NSLog(#"I constructed this iBeacon manually: %#", iBeacon);
However, if you are forcing the CLBeacon class to be used in ways it was not designed for, that might mean you are doing something wrong.
Full disclosure: I work for Radius Networks.
When you initialize a CLBeaconRegion object you can specify the major and minor values. Take a look at the initWithProximityUUID:major:minor:identifier: method.
As far as I am aware, once a beacon is active you cannot change its values unless you recreate the object. RSSI represents the signal strength of the beacon, which is read-only and depends on the environment.
Here is the link to the [documentation](https://developer.apple.com/library/iOs/documentation/CoreLocation/Reference/CLBeaconRegion_class/Reference/Reference.html#//apple_ref/doc/uid/TP40013054).

How to get suitable CGImage from combined TIFF for display scale

I have two PNGs in a Mac project. Normal and @2x. Xcode combines these into a single TIFF with the @2x being at index 0 and the @1x at index 1.
What is the suggested approach to get the appropriate image as CGImageRef version (for use with Quartz) for the current display scale?
I can get the image manually via CGImageSource:
NSBundle *mainBundle = [NSBundle mainBundle];
NSURL *URL = [mainBundle URLForResource:@"Canvas-Bkgd-Tile" withExtension:@"tiff"];
CGImageSourceRef source = CGImageSourceCreateWithURL((__bridge CFURLRef)(URL), NULL);
_patternImage = CGImageSourceCreateImageAtIndex(source, 1, NULL); // index 1 is @1x, index 0 is @2x
CFRelease(source);
I also found this to work, but I am not certain that it will return the Retina version on a Retina display:
NSImage *patternImage = [NSImage imageNamed:@"Canvas-Bkgd-Tile.tiff"];
_patternImage = [patternImage CGImageForProposedRect:NULL context:nil hints:nil];
CGImageRetain(_patternImage); // retain image, because NSImage goes away
An acceptable answer to this question either provides a solution for getting the appropriate CGImage from a combined multi-resolution TIFF, or explains why the second approach here works, or what changes are required.
I am opting to answer "why the second approach here is working".
In one of the WWDC videos published since 2010, they said that:
+[NSImage imageNamed:] chooses the best image representation object available for the current display.
So chances are that you are calling this class method from within a locked focus context (e.g. within a drawRect: method or similar), or maybe you actually called lockFocus yourself. Anyway, the result is that you get the most suitable image. But only when calling +[NSImage imageNamed:].
EDIT: Found it here:
http://adcdownload.apple.com//wwdc_2012/wwdc_2012_session_pdfs/session_213__introduction_to_high_resolution_on_os_x.pdf
Search for the keyword "best" in the slides: "NSImage automatically chooses best representation […]".
So your second version will return the Retina version on a Retina display; you can be certain of it, as it is advertised in the documentation[*].
[*] This will only work if you provide valid artwork.
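If you would rather not depend on the current graphics context, a minimal sketch (assuming you know the backing scale factor you want, here taken from the window) can pass an NSImageHintCTM hint so the best-matching representation is chosen:
// Sketch: explicitly request the representation for a given backing scale.
NSImage *patternImage = [NSImage imageNamed:@"Canvas-Bkgd-Tile.tiff"];
NSAffineTransform *scaleTransform = [NSAffineTransform transform];
[scaleTransform scaleBy:self.window.backingScaleFactor]; // e.g. 2.0 on Retina
NSDictionary *hints = @{ NSImageHintCTM : scaleTransform };
_patternImage = [patternImage CGImageForProposedRect:NULL context:nil hints:hints];
CGImageRetain(_patternImage); // retain image, because NSImage goes away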

Get Volume of AVPlayer in iOS

There are plenty of questions asking how to set the volume of an AVPlayer, but how do you get the current volume of the player in iOS?
For example, I am trying to fade a song out from its current level. I could save the volume elsewhere and refer to it, but would rather read the value directly from the AVPlayer.
An AVPlayer contains one or more AVPlayerItem objects, and it is through these objects that you can get and set audio levels for media played by an AVPlayer. Head to the AVPlayerItem docs and look at the audioMix property, and also check out my answer to a slightly different question, which should still provide some useful info.
Following up after your comment, this is (I think) how you would get the volume values from the - (BOOL)getVolumeRampForTime:(CMTime)time startVolume:(float *)startVolume endVolume:(float *)endVolume timeRange:(CMTimeRange *)timeRange method:
// Get your AVAudioMixInputParameters instance, here called audioMixInputParameters
// currentTime is the current playhead time of your media
float startVolume;
float endVolume;
CMTimeRange timeRange;
BOOL success = [audioMixInputParameters getVolumeRampForTime:currentTime
                                                  startVolume:&startVolume
                                                    endVolume:&endVolume
                                                    timeRange:&timeRange];
// startVolume and endVolume should now be set
NSLog(@"Start volume: %f | End volume: %f", startVolume, endVolume);
Apple's AVPlayer documentation for OS X lists a volume property, but the documentation for the same class on iOS doesn't show one. Would your project allow you to use AVAudioPlayer instead? That class does have a volume property on iOS that's much easier to set and retrieve.
You could use the volume property of the AVPlayer class. Here's the AVPlayer class reference link. Quoting from it:
volume
Indicates the current audio volume of the player.
@property(nonatomic) float volume
Discussion
0.0 means “silence all audio,” 1.0 means “play at the full volume of the current item.”
Availability
Available in OS X v10.7 and later.
Declared In
AVPlayer.h
edit:
You could try getting the system volume instead. This link provides two ways.
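Coming back to the original fade-out use case: on OS X 10.7+ (and on iOS versions where AVPlayer does expose volume), a rough sketch of fading from the current level might look like this, with the timer step and property names purely illustrative:
// Sketch: fade out from whatever level the player is currently at.
// Assumes self.player is an AVPlayer and a repeating NSTimer calls this method.
- (void)fadeStep:(NSTimer *)timer
{
    float current = self.player.volume;  // read the current level directly
    float next = current - 0.05f;        // illustrative step size
    if (next <= 0.0f) {
        self.player.volume = 0.0f;
        [timer invalidate];
    } else {
        self.player.volume = next;
    }
}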

Frame synchronization with AVPlayer

I'm having an issue syncing external content in a CALayer with an AVPlayer at high precision.
My first thought was to lay out an array of frames (equal to the number of frames in the video) within a CAKeyframeAnimation and sync with an AVSynchronizedLayer. However, upon stepping through the video frame-by-frame, it appears that AVPlayer and Core Animation redraw on different cycles, as there is a slight (but noticeable) delay between them before they sync up.
Short of processing and displaying through Core Video, is there a way to accurately sync with an AVPlayer on the frame level?
Update: February 5, 2012
So far the best way I've found to do this is to pre-render through AVAssetExportSession coupled with AVVideoCompositionCoreAnimationTool and a CAKeyframeAnimation.
I'm still very interested in learning of any real-time ways to do this, however.
What do you mean by 'high precision?'
Although the docs claim that an AVAssetReader is not designed for real-time usage, in practice I have had no problems reading video in real time using it (cf. https://stackoverflow.com/a/4216161/42961). The returned frames come with a presentation timestamp which you can fetch using CMSampleBufferGetPresentationTimeStamp.
You'll want one part of the project to be the 'master' timekeeper here. Assuming your CALayer animation is quick to compute and doesn't involve potentially blocking things like disk access, I'd use that as the master time source. When you need to draw content (e.g. in the draw selector of your UIView subclass) you should read currentTime from the CALayer animation, then if necessary advance through the AVAssetReader's video frames using copyNextSampleBuffer until CMSampleBufferGetPresentationTimeStamp returns >= currentTime, draw the frame, and then draw the CALayer animation content over the top.
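A rough sketch of that catch-up loop (assuming an AVAssetReaderTrackOutput that is already reading, and that the layer time has been converted to a CMTime; names are illustrative) might be:
// Sketch: advance the reader until its frames catch up with the master clock.
CMTime masterTime = CMTimeMakeWithSeconds(layerCurrentTime, 600);
CMSampleBufferRef sampleBuffer = NULL;
CMTime presentationTime = kCMTimeInvalid;
do {
    if (sampleBuffer) CFRelease(sampleBuffer);
    sampleBuffer = [readerOutput copyNextSampleBuffer];
    if (!sampleBuffer) break; // end of stream or reader failure
    presentationTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
} while (CMTIME_COMPARE_INLINE(presentationTime, <, masterTime));
if (sampleBuffer) {
    CVImageBufferRef frame = CMSampleBufferGetImageBuffer(sampleBuffer);
    // ... draw `frame`, then draw the CALayer animation content on top ...
    CFRelease(sampleBuffer);
}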
If your player is using an AVURLAsset, did you load it with the precise duration flag set? I.e. something like:
NSDictionary *options = [NSDictionary dictionaryWithObject:[NSNumber numberWithBool:YES] forKey:AVURLAssetPreferPreciseDurationAndTimingKey];
AVURLAsset *urlAsset = [AVURLAsset URLAssetWithURL:aUrl options:options];

How to set Segment in UISegmentControl programmatically?

I'm setting up a UISegmentControl programmatically in my iPhone app. By default it has 2 segments. In my code I'm populating more than two segments. How do I set this up? Any help?
Update
My question is: how do I put more than 2 segments on the segmented control in code?
First of all, the segmented control in iOS is the UISegmentedControl class, not NS...
To create it with any number of segments you want, you can use the initWithItems: initializer; pass an array of titles (NSStrings) or images, one per segment. For example:
UISegmentedControl *segControl = [[UISegmentedControl alloc] initWithItems:[NSArray arrayWithObjects:@"1", @"2", @"3", @"4", nil]];
Later you can change your control using the insertSegmentWithImage:atIndex:animated:, insertSegmentWithTitle:atIndex:animated: and/or removeSegmentAtIndex:animated: methods.
You can find descriptions of those (and more!) methods in the Apple docs.
Before your edit, you were actually talking about UISegmentedControl, and to set the selected segment programmatically, you want to use the selectedSegmentIndex property (the documentation for which I've linked for you).
And to add in additional segments, you can use insertSegmentWithTitle:atIndex:animated:.
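Putting the two answers together, a small sketch (titles and index values here are just illustrative) might look like:
// Sketch: create the control, add an extra segment, and select one in code.
UISegmentedControl *segControl = [[UISegmentedControl alloc]
    initWithItems:[NSArray arrayWithObjects:@"One", @"Two", nil]];
[segControl insertSegmentWithTitle:@"Three" atIndex:2 animated:NO];
segControl.selectedSegmentIndex = 1; // selects "Two"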