Proper usage of CIDetectorTracking - objective-c

Apple recently added a new constant to the CIDetector class called CIDetectorTracking which appears to be able to track faces between frames in a video. This would be very beneficial for me if I could manage to figure out how it works.
I've tried adding this key to the detector's options dictionary using every object I can think of that is remotely relevant, including my AVCaptureStillImageOutput instance, the UIImage I'm working on, YES, 1, etc.:
NSDictionary *detectorOptions = [[NSDictionary alloc] initWithObjectsAndKeys:CIDetectorAccuracyHigh, CIDetectorAccuracy, myAVCaptureStillImageOutput, CIDetectorTracking, nil];
But no matter what parameter I try to pass, it either crashes (obviously I'm guessing at it here) or the debugger outputs:
Unknown CIDetectorTracking specified. Ignoring.
Normally, I wouldn't be guessing at this, but resources on this topic are virtually nonexistent. Apple's class reference states:
A key used to enable or disable face tracking for the detector. Use
this option when you want to track faces across frames in a video.
Other than availability being iOS 6+ and OS X 10.8+ that's it.
Comments inside CIDetector.h:
/* The key in the options dictionary used to specify that feature tracking should be used. */
If that wasn't bad enough, a Google search provides 7 results (8 when they find this post), all of which are either Apple class references, API diffs, an SO post asking how to achieve this in iOS 5, or third-party copies of the former.
All that being said, any hints or tips for the proper usage of CIDetectorTracking would be greatly appreciated!

You're right, this key is not very well documented. Besides the API docs, it is also not explained in:
the CIDetector.h header file
the Core Image Programming Guide
the WWDC 2012 Session "520 - What's New in Camera Capture"
the sample code to this session (StacheCam 2)
I tried different values for CIDetectorTracking and the only accepted values seem to be @(YES) and @(NO). With other values it prints this message in the console:
Unknown CIDetectorTracking specified. Ignoring.
When you set the value to @(YES) you should get tracking IDs with the detected face features.
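For reference, here is a minimal sketch of that setup (assuming an existing CIImage named image; CIFaceFeature exposes the tracking ID via its hasTrackingID/trackingID properties):

// Sketch: enable tracking and read the tracking ID from each detected face.
NSDictionary *options = @{ CIDetectorAccuracy : CIDetectorAccuracyHigh,
                           CIDetectorTracking : @(YES) };
CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                          context:nil
                                          options:options];
for (CIFaceFeature *feature in [detector featuresInImage:image]) {
    if (feature.hasTrackingID) {
        // The same physical face should keep the same trackingID across frames.
        NSLog(@"face %d at %@", feature.trackingID, NSStringFromCGRect(feature.bounds));
    }
}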
However, when you want to detect faces in content captured from the camera, you should prefer the face detection API in AVFoundation. It has face tracking built in, and the face detection happens in the background on the GPU, so it will be much faster than Core Image face detection.
It requires iOS 6 and at least an iPhone 4S or iPad 2.
The faces are sent as metadata objects (AVMetadataFaceObject) to the AVCaptureMetadataOutputObjectsDelegate.
You can use this code (taken from StacheCam 2 and the slides of the WWDC session mentioned above) to setup face detection and get face metadata objects:
- (void) setupAVFoundationFaceDetection
{
    self.metadataOutput = [AVCaptureMetadataOutput new];
    if ( ! [self.session canAddOutput:self.metadataOutput] ) {
        return;
    }

    // Metadata processing will be fast, and mostly updating UI, which should be done on the main thread,
    // so just use the main dispatch queue instead of creating a separate one
    // (compare this to the expensive Core Image face detection, done on a separate queue)
    [self.metadataOutput setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];
    [self.session addOutput:self.metadataOutput];

    if ( ! [self.metadataOutput.availableMetadataObjectTypes containsObject:AVMetadataObjectTypeFace] ) {
        // face detection isn't supported (via AV Foundation), fall back to Core Image
        return;
    }

    // We only want faces; if we don't set this we would detect everything available
    // (some objects may be expensive to detect, so best form is to select only what you need)
    self.metadataOutput.metadataObjectTypes = @[ AVMetadataObjectTypeFace ];
}
// AVCaptureMetadataOutputObjectsDelegate
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputMetadataObjects:(NSArray *)metadataObjects
       fromConnection:(AVCaptureConnection *)c
{
    for ( AVMetadataObject *object in metadataObjects ) {
        if ( [[object type] isEqual:AVMetadataObjectTypeFace] ) {
            AVMetadataFaceObject *face = (AVMetadataFaceObject *)object;
            CMTime timestamp = [face time];
            CGRect faceRectangle = [face bounds];
            CGFloat rollAngle = [face rollAngle];
            CGFloat yawAngle = [face yawAngle];
            NSInteger faceID = [face faceID]; // use this ID for tracking
            // Do interesting things with this face
        }
    }
}
If you want to display the face frames in the preview layer you need to get the transformed face object:
AVMetadataFaceObject *adjusted = (AVMetadataFaceObject *)[self.previewLayer transformedMetadataObjectForMetadataObject:face];
For details check out the sample code from WWDC 2012.

Related

Changing setPreferredIOBufferDuration at Runtime results in Core Audio Error -50

I am writing an Audio Unit (Remote IO) based app that displays waveforms at a given buffer size. The app initially starts off with a preferred buffer size of 0.0001, which results in very small buffer frame sizes (I think it's 14 frames). Then at runtime I have a UI element that allows switching buffer frame sizes via AVAudioSession's method setPreferredIOBufferDuration:error:.
Here is the code, where the first two cases change from a smaller to a larger buffer size. Cases 3-5 are not specified yet. But the app crashes at AudioUnitRender with a -50 error code.
- (void)setBufferSizeFromMode:(int)mode {
    NSTimeInterval bufferDuration;
    switch (mode) {
        case 1:
            bufferDuration = 0.0001;
            break;
        case 2:
            bufferDuration = 0.001;
            break;
        case 3:
            bufferDuration = 0.0; // reserved
            break;
        case 4:
            bufferDuration = 0.0; // reserved
            break;
        case 5:
            bufferDuration = 0.0; // reserved
            break;
        default:
            break;
    }

    AVAudioSession *session = [AVAudioSession sharedInstance];
    NSError *audioSessionError = nil;
    [session setPreferredIOBufferDuration:bufferDuration error:&audioSessionError];
    if (audioSessionError) {
        NSLog(@"Error %ld, %@",
              (long)audioSessionError.code, audioSessionError.localizedDescription);
    }
}
Based on reading the Core Audio and AVFoundation documentation, I was led to believe that you can change the audio hardware configuration at runtime. There may be some gaps in audio or distortion, but I am fine with that for now. Is there an obvious reason for this crash? Or must I reinitialize everything (my audio session, my audio unit, my audio buffers, etc.) for each change of the buffer duration?
Edit: I have tried calling AudioOutputUnitStop(self.myRemoteIO); before changing the session buffer duration and then starting it again after it is set. I've also tried setting the session to inactive and then reactivating it, but both result in the -50 OSStatus from AudioUnitRender() in my AU input callback.
A -50 error usually means the audio unit code is trying to set or use an invalid parameter value.
Some iOS devices don't support actual buffer durations below 5.3 ms (or 0.0058 seconds on older devices). And iOS devices appear free to switch to an actual buffer duration 4X longer than that, or even to alternate between slightly different values, at times not under the app's control.
The inNumberFrames is given to the audio unit callback as a parameter; your app can't arbitrarily specify that value.
If you want to process given buffer sizes, pull them out of an intermediate lock-free circular FIFO, which the audio unit callback can feed.
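For illustration, here is a minimal single-producer/single-consumer ring buffer sketch in plain C (hypothetical names; it assumes a power-of-two capacity, with the render callback as the only writer and one processing thread as the only reader):

#include <stdatomic.h>
#include <stddef.h>

typedef struct {
    float          *buffer;     // storage, `capacity` elements
    size_t          capacity;   // must be a power of two
    _Atomic size_t  writeIndex; // only the audio callback advances this
    _Atomic size_t  readIndex;  // only the processing thread advances this
} RingBuffer;

// Called from the audio unit render callback; never blocks.
size_t RingBufferWrite(RingBuffer *rb, const float *src, size_t count) {
    size_t w = atomic_load_explicit(&rb->writeIndex, memory_order_relaxed);
    size_t r = atomic_load_explicit(&rb->readIndex, memory_order_acquire);
    size_t space = rb->capacity - (w - r);
    if (count > space) count = space;   // drop samples that don't fit
    for (size_t i = 0; i < count; i++)
        rb->buffer[(w + i) & (rb->capacity - 1)] = src[i];
    atomic_store_explicit(&rb->writeIndex, w + count, memory_order_release);
    return count;
}

// Called from the processing thread with whatever block size the app wants.
size_t RingBufferRead(RingBuffer *rb, float *dst, size_t count) {
    size_t r = atomic_load_explicit(&rb->readIndex, memory_order_relaxed);
    size_t w = atomic_load_explicit(&rb->writeIndex, memory_order_acquire);
    size_t avail = w - r;
    if (count > avail) count = avail;
    for (size_t i = 0; i < count; i++)
        dst[i] = rb->buffer[(r + i) & (rb->capacity - 1)];
    atomic_store_explicit(&rb->readIndex, r + count, memory_order_release);
    return count;
}

The render callback pushes whatever inNumberFrames it is handed; the display code pulls exactly the block size it wants, which decouples the two rates.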
Also: try waiting a second or so after calling audio stop before changing parameters or restarting. There appears to be a delay between when you call stop and when the hardware actually stops.
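In practice that might look something like this (a sketch using the names from the question; the one-second delay is a guess, not a documented figure):

// Stop the unit, give the hardware time to actually stop, then reconfigure and restart.
AudioOutputUnitStop(self.myRemoteIO);
dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(1.0 * NSEC_PER_SEC)),
               dispatch_get_main_queue(), ^{
    [self setBufferSizeFromMode:mode];
    AudioOutputUnitStart(self.myRemoteIO);
});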

Memory management issue with CIImage / CGImageRef

Good morning,
I encounter a memory management issue in the video processing software I'm trying to write (video capture + (almost) real-time processing + display + recording).
The following code is part of the "..didOutputSampleBuffer.." delegate method of AVCaptureVideoDataOutputSampleBufferDelegate.
capturePreviewLayer is a CALayer.
ctx is a CIContext I reuse over and over.
outImage is a vImage_Buffer.
With the commented section kept commented, memory usage is stable and acceptable, but if I uncomment it, memory won't stop increasing. Note that if I leave the filtering operation commented and only keep the CIImage creation and conversion back to CGImageRef, the problem remains (I mean: I don't think this is related to the filter itself).
If I run Xcode's Analyze, it points out a potential memory leak if this part is uncommented, but none if it is commented.
Does anybody have an idea how to explain and fix this?
Thank you very much !
Note: I prefer not to use AVCaptureVideoPreviewLayer and its filters property.
CGImageRef convertedImage = vImageCreateCGImageFromBuffer(&outImage, &outputFormat, NULL, NULL, 0, &err);

//CIImage *img = [CIImage imageWithCGImage:convertedImage];
////[acc setValue:img forKey:@"inputImage"];
////img = [acc valueForKey:@"outputImage"];
//convertedImage = [self.ctx createCGImage:img fromRect:img.extent];

dispatch_sync(dispatch_get_main_queue(), ^{
    self.capturePreviewLayer.contents = (__bridge id)(convertedImage);
});

CGImageRelease(convertedImage);
free(outImage.data);
Both vImageCreateCGImageFromBuffer() and -[CIContext createCGImage:fromRect:] give you a reference you are responsible for releasing. You are only releasing one of them.
When you replace the value of convertedImage with the new CGImageRef, you are losing the reference to the previous one without releasing it. You need to add another call to CGImageRelease(convertedImage) after your last use of the old image and before you lose that reference.
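Concretely, the uncommented path with the missing release added might look like this (a sketch using the same names as in the question; [CIImage imageWithCGImage:] retains what it needs, so the first image can be released right after):

CGImageRef convertedImage = vImageCreateCGImageFromBuffer(&outImage, &outputFormat, NULL, NULL, 0, &err);
CIImage *img = [CIImage imageWithCGImage:convertedImage];
CGImageRelease(convertedImage); // release the vImage-created image once the CIImage has it
[acc setValue:img forKey:@"inputImage"];
img = [acc valueForKey:@"outputImage"];
convertedImage = [self.ctx createCGImage:img fromRect:img.extent];
dispatch_sync(dispatch_get_main_queue(), ^{
    self.capturePreviewLayer.contents = (__bridge id)(convertedImage);
});
CGImageRelease(convertedImage); // release the CIContext-created image too
free(outImage.data);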

Changing Philips Hue Brightness with a UISlider in Objective-C

I am building an app that controls three Philips Hue RGB LED bulbs. I want to be able to change the brightness using a UISlider. Currently I have a UISlider that calls a method upon each change; however, this far exceeds the Philips Hue bridge's limitation of 10 commands per second. Here is the method I call upon a change in the UISlider.
- (void)changeBulbBrightness:(NSNumber *)currentBrightness
{
    NSTimeInterval timeInterval = [self.timeLastCommandSent timeIntervalSinceNow];
    NSLog(@"Time Since Last command: %f", timeInterval);
    if (timeInterval < -0.3)
    {
        NSLog(@"COMMAND SENT!!!!");
        PHBridgeResourcesCache *cache = [PHBridgeResourcesReader readBridgeResourcesCache];
        PHBridgeSendAPI *bridgeSendAPI = [[PHBridgeSendAPI alloc] init];
        for (PHLight *light in cache.lights.allValues)
        {
            PHLightState *lightState = light.lightState;
            //PHLightState *lightState = [[PHLightState alloc] init];
            if (lightState.on)
            {
                [lightState setBrightness:currentBrightness];
                // Send lightstate to light
                [bridgeSendAPI updateLightStateForId:light.identifier withLightState:lightState completionHandler:^(NSArray *errors) {
                    /*if (errors != nil) {
                        NSString *message = [NSString stringWithFormat:@"%@: %@", NSLocalizedString(@"Errors", @""), errors != nil ? errors : NSLocalizedString(@"none", @"")];
                        if (self.loggingOn)
                        {
                            NSLog(@"Brightness Change Response: %@", message);
                        }
                    }*/
                }];
            }
            self.timeLastCommandSent = [[NSDate alloc] init];
        }
    }
    self.appDelegate.currentSetup.brightnessSetting = currentBrightness;
    NSLog(@"Brightness Now = %@", currentBrightness);
}
I've tried making a timer to limit the number of commands to 10 per second, but the bridge still acts the same way it does when it is overwhelmed with commands (it stops accepting all commands). Any help or direction would be greatly appreciated. Thanks in advance!
One reason might be your multiple lights. You are sending an update command for each light, so if you have 3 bulbs connected, as in the Hue starter kit, you might still send 10 commands or a little more if there is some unfortunate caching involved that packs the updates from 2 seconds into 1 second of sending. Thus I suggest you further decrease the number of updates you are sending (try 0.5 or even 1.0 as an interval) and see if it gets better.
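One way to lower the rate without making the slider feel laggy is to coalesce changes: remember only the latest value and flush it on a fixed-interval timer. A sketch, where pendingBrightness and throttleTimer are hypothetical properties:

- (IBAction)brightnessSliderChanged:(UISlider *)slider {
    // Remember only the most recent value; older unsent values are obsolete anyway.
    self.pendingBrightness = @((NSInteger)slider.value);
    if (!self.throttleTimer) {
        self.throttleTimer = [NSTimer scheduledTimerWithTimeInterval:0.5
                                                              target:self
                                                            selector:@selector(flushBrightness:)
                                                            userInfo:nil
                                                             repeats:YES];
    }
}

- (void)flushBrightness:(NSTimer *)timer {
    if (self.pendingBrightness) {
        [self changeBulbBrightness:self.pendingBrightness]; // the method from the question
        self.pendingBrightness = nil;
    } else {
        [self.throttleTimer invalidate]; // nothing new since the last flush
        self.throttleTimer = nil;
    }
}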
Also note that the SDK is quite vague about the rate limit. It says:
If you stay roughly around 10 commands per second
Since the Philips Hue SDK is generally not that well supported (look at the open GitHub issues), take this with a grain of salt and do your own experiments. Once I have the time to check it myself, I will post an update here.
Update 1: I just discovered this remark by one of the contributors to the Hue SDK GitHub repo (maybe a Philips employee), advising to send only 2 commands per second:
As mentioned earlier, be careful when doing lots of requests in a loop, as the calls to PHBridgeSendAPI are not blocking and requests are retained until they get a response or time out. Two calls per second seems like a safe rate; if you want to have a higher rate, it's advised to chain requests with a queuing mechanism of your own (to ensure memory will be released before new requests are getting called).

How to dynamically typecast objects to support different versions of an application's ScriptingBridge header files?

Currently I'm trying to implement support for multiple versions of iTunes via ScriptingBridge.
For example the method signature of the property playerPosition changed from (10.7)
@property NSInteger playerPosition; // the player's position within the currently playing track in seconds.
to (11.0.5)
@property double playerPosition; // the player's position within the currently playing track in seconds
With the most current header file in my application and an older iTunes version, the return value of this property would always be 3. The same thing goes the other way around.
So I went ahead and created three different iTunes header files, 11.0.5, 10.7 and 10.3.1 via
sdef /path/to/application.app | sdp -fh --basename applicationName
For each version of iTunes I adapted the basename to include the version, e.g. iTunes_11_0_5.h.
This results in the interfaces in the header files being prefixed with their specific version number.
My goal is/was to typecast the objects I'd use with the interfaces of the correct version.
The path to iTunes is fetched via an NSWorkspace method; then I'm creating an NSBundle from it and extracting the CFBundleVersion from the infoDictionary.
The three different versions (11.0.5, 10.7, 10.3.1) are also declared as constants which I compare to the iTunes version of the user via
[kiTunes_11_0_5 compare:versionInstalled options:NSNumericSearch]
Then I check if each result equals NSOrderedSame, so I'll know which version of iTunes the user has installed.
Implementing this with if statements got a bit out of hand, as I'd need to do these typecasts in many different places in my class. I then started to realize that this would result in a lot of duplicate code, so I tinkered around and thought about it to find a different solution, one that is more "best practice".
Generally speaking, I'd need to dynamically typecast the objects I use, but I simply can't find a solution which wouldn't end in loads of duplicated code.
Edit
if ([kiTunes_11_0_5 compare:_versionString options:NSNumericSearch] == NSOrderedSame) {
    NSLog(@"%@, %@", kiTunes_11_0_5, _versionString);
    playerPosition = [(iTunes_11_0_5_Application *)_iTunes playerPosition];
    duration = [(iTunes_11_0_5_Track *)_currentTrack duration];
    finish = [(iTunes_11_0_5_Track *)_currentTrack finish];
} else if [... and so on for each version to test and cast]
[All code directly entered into answer.]
You could tackle this with a category, a proxy, or a helper class; here is a sketch of one possible design for the latter.
First create a helper class which takes an instance of your iTunes object and the version string. Also, to avoid doing repeated string comparisons, do the comparison once in the class setup. You don't give the type of your iTunes application object, so we'll randomly call it ITunesAppObj - replace with the correct type:
typedef enum { kEnumiTunes_11_0_5, ... } XYZiTunesVersion;

@implementation XYZiTunesHelper
{
    ITunesAppObj *iTunes;
    XYZiTunesVersion version;
}

- (id)initWith:(ITunesAppObj *)_iTunes version:(NSString *)_version
{
    self = [super init];
    if (self)
    {
        iTunes = _iTunes;
        if ([kiTunes_11_0_5 compare:_version options:NSNumericSearch] == NSOrderedSame)
            version = kEnumiTunes_11_0_5;
        else ...
    }
    return self;
}
Now add a property to this class for each item which changes type between versions, declaring it with whatever "common" type you pick. E.g. for playerPosition this might be:
@interface XYZiTunesHelper : NSObject
@property double playerPosition;
...
@end

@implementation XYZiTunesHelper

// implement getter for playerPosition
- (double)playerPosition
{
    switch (version)
    {
        case kEnumiTunes_11_0_5:
            return [(iTunes_11_0_5_Application *)iTunes playerPosition];
        // other cases - by using an enum it is both fast and the
        // compiler will check you cover all cases
    }
}
// now implement the setter...
Do something similar for the track type. Your code fragment then becomes:
XYZiTunesHelper *_iTunesHelper = [[XYZiTunesHelper alloc] initWith:_iTunes
                                                           version:_versionString];
...
playerPosition = [_iTunesHelper playerPosition];
duration = [_currentTrackHelper duration];
finish = [_currentTrackHelper finish];
The above is dynamic, as you requested - at each call there is a switch to invoke the appropriate version. You could of course make the XYZiTunesHelper class abstract (or an interface or a protocol) and write three implementations of it, one for each iTunes version; then you do the test once and select the appropriate implementation. This approach is more "object-oriented", but it does mean the various implementations of, say, playerPosition are not together. Pick whichever style you feel most comfortable with in this particular case.
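For instance, the protocol variant might start out like this (a sketch; the adapter names and the adapterForVersion: helper are hypothetical):

@protocol XYZiTunesAdapter <NSObject>
- (double)playerPosition;
- (double)duration;
- (double)finish;
@end

// One adapter class per supported iTunes version, each casting to its own header's types:
@interface XYZiTunes_11_0_5_Adapter : NSObject <XYZiTunesAdapter>
- (id)initWithApplication:(iTunes_11_0_5_Application *)app;
@end

// Pick the right adapter once after the version comparison, then call through
// the protocol everywhere else:
id<XYZiTunesAdapter> adapter = [self adapterForVersion:_versionString];
playerPosition = [adapter playerPosition];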
HTH
Generating multiple headers and switching them in and out based on the application's version number is a really bad 'solution': aside from being horribly complicated, it is very brittle since it couples your code to specific iTunes versions.
Apple events, like HTTP, were designed by people who understood how to construct large, flexible long-lived distributed systems whose clients and servers could evolve and change over time without breaking each other. Scripting Bridge, like a lot of the modern 'Web', was not.
...
The correct way to retrieve a specific type of value is to specify your required result type in the 'get' event. AppleScript can do this:
tell app "iTunes" to get player position as real
Ditto objc-appscript, which provides convenience methods specifically for getting results as C numbers:
ITApplication *iTunes = [ITApplication applicationWithBundleID:@"com.apple.itunes"];
NSError *error = nil;
double pos = [[iTunes playerPosition] getDoubleWithError: &error];
or, if you'd rather get the result as an NSNumber:
NSNumber *pos = [[iTunes playerPosition] getWithError: &error];
SB, however, automatically sends the 'get' event for you, giving you no way to tell it what type of result you want before it returns. So if the application decides to return a different type of value for any reason, SB-based ObjC code breaks, from the sdp headers onwards.
...
In an ideal world you'd just ditch SB and use objc-appscript, which, unlike SB, knows how to speak Apple events correctly. Unfortunately, appscript is no longer maintained, thanks to Apple legacying the original Carbon Apple Event Manager APIs without providing viable Cocoa replacements, so it isn't recommended for new projects. So you're pretty much stuck with the Apple-supplied options, neither of which is good or pleasant to use. (And then they wonder why programmers hate everything AppleScript so much...)
One solution would be to use AppleScript via the AppleScript-ObjC bridge. AppleScript may be a lousy language, but at least it knows how to speak Apple events correctly. And ASOC, unlike Cocoa's crappy NSAppleScript class, takes most of the pain out of gluing AS and ObjC code together in your app.
For this particular problem though, it is possible to monkey-patch around SB's defective glues by dropping down to SB's low-level methods and raw four-char codes to construct and send the event yourself. It's a bit tedious to write, but once it's done it's done (at least until the next time something changes...).
Here's a category that shows how to do this for the 'player position' property:
@implementation SBApplication (ITHack)

- (double)iTunes_playerPosition {
    // Workaround for SB fail: older versions of iTunes return typeInteger while newer versions
    // return typeIEEE64BitFloatingPoint, but SB is too stupid to handle this correctly itself.
    // Build a reference to the 'player position' property using four-char codes from iTunes.sdef
    SBObject *ref = [self propertyWithCode:'pPos'];
    // Build and send the 'get' event to iTunes (note: while it is possible to include a
    // keyAERequestedType parameter that tells the Apple Event Manager to coerce the returned
    // AEDesc to a specific number type, it's not necessary to do so as sendEvent:id:parameters:
    // unpacks all numeric AEDescs as NSNumber, which can perform any needed coercions itself)
    NSNumber *res = [self sendEvent:'core' id:'getd' parameters:'----', ref, nil];
    // The returned value is an NSNumber containing opaque numeric data, so call the appropriate
    // method (-integerValue, -doubleValue, etc.) to get the desired representation
    return [res doubleValue];
}

@end
Notice I've prefixed the method name as iTunes_playerPosition. Unlike objc-appscript, which uses static .h+.m glues, SB dynamically creates all of its iTunes-specific glue classes at runtime, so you can't add categories or otherwise patch them directly. All you can do is add your category to the root SBObject/SBApplication class, making them visible across all classes in all application glues. Swizzling the method names should avoid any risk of conflict with any other applications' glue methods, though obviously you still need to take care to call them on the right objects otherwise you'll likely get unexpected results or errors.
Obviously, you'll have to repeat this patch for any other properties that have undergone the same enhancement in iTunes 11, but at least once it's done you won't have to change it again if, say, Apple reverts back to integers in a future release or if you've forgotten to include a previous version in your complicated switch block. Plus, of course, you won't have to mess about generating multiple iTunes headers: just create one for the current version, and remember to avoid using the original -playerPosition and other broken SB methods in your code, using your own robust iTunes_... methods instead.

How to use iOS 6 challenges in Game Center

Firstly,
I am fairly new to Objective-C / Xcode development, so there is a good chance I am being a muppet. I have written a few simple apps to try things, and my most recent one has been testing the Game Center classes / functionality.
I have linked OK to leaderboards and achievements - but I can't get challenges working.
I have added the following code, which is in my .m:
GKLeaderboard *query = [[GKLeaderboard alloc] init];
query.category = LoadLeaderboard;
query.playerScope = GKLeaderboardPlayerScopeFriendsOnly;
query.range = NSMakeRange(1, 100);
[query loadScoresWithCompletionHandler:^(NSArray *scores, NSError *error)
{
    NSPredicate *filter = [NSPredicate predicateWithFormat:@"value < %qi", scoreint];
    NSArray *lesserScores = [scores filteredArrayUsingPredicate:filter];
    [self presentChallengeWithPreselectedScores:lesserScores];
}];
this code is basically taken from Apple, just with the variable names replaced...
this however gives an error on
[self presentChallengeWithPreselectedScores: lesserScores];
error: Implicit conversion of an Objective-C pointer to 'int64_t *' (aka 'long long *') is disallowed with ARC
LoadLeaderboard is defined as a string.
scoreint is defined as an integer; I thought this might be the issue, as it's not int64_t, but that does not seem to make a difference.
I am sure that for someone who has any kind of a clue this is a straightforward fix, but I am struggling at the moment. So if anyone can be kind and help a fool in need, it would be most appreciated.
Thanks,
Matt
welcome to Stack Overflow. I don't know your implementation of the presentChallengeWithPreselectedScores method, so I can't tell (although it looks like the method takes a 64-bit integer and you're trying to feed it an array).
There are two ways to issue challenges:
1 - This is the easier way: if you've successfully implemented leaderboards and score posting to Game Center, challenges work out of the box in iOS 6. The user can always view the leaderboard, select a submitted score (or a completed achievement), and select "Challenge Friend".
2 - The second way is to build a friend picker and let the user issue challenges from within your game. But considering you're new to Objective-C and Game Center, it's not so easy. For your reference, here is how you do it:
when you submit a GKScore object to the leaderboards, you can retain and use that GKScore object (call it myScoreObject) like this:
[myScoreObject issueChallengeToPlayers:selectedFriends message:yourMessage];
where selectedFriends is an NSArray (the friend picker should generate this); the message is optional and can be used only if you want to send a message to the challenged friends.
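Putting it together with the iOS 6-era Game Kit API, a sketch (the category string, score value, and selectedFriends are placeholders):

// Report the score first, then issue the challenge from the same GKScore object.
GKScore *myScoreObject = [[GKScore alloc] initWithCategory:@"com.example.leaderboard"]; // hypothetical category
myScoreObject.value = scoreValue;
[myScoreObject reportScoreWithCompletionHandler:^(NSError *error) {
    if (error == nil) {
        // selectedFriends: an NSArray of player identifiers from your friend picker
        [myScoreObject issueChallengeToPlayers:selectedFriends
                                       message:@"Beat my score!"];
    }
}];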