Changing Philips Hue Brightness with a UISlider in Objective-C

I am building an app that controls three Philips Hue RGB LED bulbs. I want to be able to change the brightness using a UISlider. Currently I have a UISlider that calls a method on each change; however, this method far exceeds the Philips Hue bridge's limit of 10 commands per second. Here is the method I call on each change of the UISlider:
- (void)changeBulbBrightness:(NSNumber *)currentBrightness
{
    NSTimeInterval timeInterval = [self.timeLastCommandSent timeIntervalSinceNow];
    NSLog(@"Time Since Last command: %f", timeInterval);
    if (timeInterval < -0.3)
    {
        NSLog(@"COMMAND SENT!!!!");
        PHBridgeResourcesCache *cache = [PHBridgeResourcesReader readBridgeResourcesCache];
        PHBridgeSendAPI *bridgeSendAPI = [[PHBridgeSendAPI alloc] init];
        for (PHLight *light in cache.lights.allValues)
        {
            PHLightState *lightState = light.lightState;
            //PHLightState *lightState = [[PHLightState alloc] init];
            if (lightState.on)
            {
                [lightState setBrightness:currentBrightness];
                // Send lightstate to light
                [bridgeSendAPI updateLightStateForId:light.identifier withLightState:lightState completionHandler:^(NSArray *errors) {
                    /*if (errors != nil) {
                        NSString *message = [NSString stringWithFormat:@"%@: %@", NSLocalizedString(@"Errors", @""), errors != nil ? errors : NSLocalizedString(@"none", @"")];
                        if (self.loggingOn)
                        {
                            NSLog(@"Brightness Change Response: %@", message);
                        }
                    }
                    */
                }];
            }
            self.timeLastCommandSent = [[NSDate alloc] init];
        }
    }
    self.appDelegate.currentSetup.brightnessSetting = currentBrightness;
    NSLog(@"Brightness Now = %@", currentBrightness);
}
I've tried making a timer to limit the number of commands to 10 per second, but the bridge still acts the same way it does when it is overwhelmed with commands (it stops accepting all commands). Any help or direction would be greatly appreciated. Thanks in advance!

One reason might be your multiple lights. You are sending an update command for each light, so if you have 3 bulbs connected, as in the Hue starter kit, you might still be sending 10 commands per second, or a little more if there is some unfortunate caching involved that packs the updates from 2 seconds into 1 second of sending. Thus I suggest you further decrease the rate of updates (try an interval of 0.5 or even 1.0 seconds) and see if it gets better.
Also note that the SDK is quite vague about the rate limit. It says:
If you stay roughly around 10 commands per second
Since the Philips Hue SDK is generally not that well supported (look at the open GitHub issues), take this with a grain of salt and do your own experiments. Once I have the time to check it myself, I will post an update here.
Update 1: I just discovered this remark by one of the contributors to the Hue SDK GitHub repo (maybe a Philips employee), advising to send only 2 commands per second:
As mentioned earlier, be careful when doing lots of requests in a loop as the calls to PHBridgeSendAPI are not blocking and requests are retained until they get a response or timeout. Two calls per second seems like a safe rate, if you want to have a higher rate, it's advised to chain requests with a queuing mechanism of your own (to ensure memory will be released before new requests are getting called).
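For illustration, one simple way to coalesce slider updates is to remember only the latest value and flush it on a repeating timer at the reduced rate. A minimal sketch, where pendingBrightness, brightnessTimer, and sendBrightnessToLights: are hypothetical names (the last one would wrap the PHBridgeSendAPI loop from the question), and the 0.5 s interval is the guess suggested above:

    - (void)sliderChanged:(UISlider *)slider {
        // Remember only the most recent value; older unsent values are superseded.
        self.pendingBrightness = @((NSInteger)slider.value);
        if (!self.brightnessTimer) {
            self.brightnessTimer = [NSTimer scheduledTimerWithTimeInterval:0.5
                                                                    target:self
                                                                  selector:@selector(flushBrightness:)
                                                                  userInfo:nil
                                                                   repeats:YES];
        }
    }

    - (void)flushBrightness:(NSTimer *)timer {
        if (self.pendingBrightness) {
            // Hypothetical helper wrapping the PHBridgeSendAPI loop from the question.
            [self sendBrightnessToLights:self.pendingBrightness];
            self.pendingBrightness = nil;
        } else {
            // Nothing changed since the last tick; stop until the slider moves again.
            [timer invalidate];
            self.brightnessTimer = nil;
        }
    }

This way, no matter how fast the slider fires, the bridge sees at most one batch of updates per timer tick, and the lights still end up at the latest value.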

Related

Best way to do a massive data sync from HealthKit?

I am looking for a smart way to import ALL data for ALL time from HealthKit for high-volume users who log certain HealthKit values hundreds of times a day for months or years. I'm also looking to do this off the main thread and in a way that is robust to the fact that users can close the app whenever they feel like it. Here's my current implementation:
- (void)fullHealthKitSyncFromEarliestDate {
    dispatch_queue_t serialQueue = dispatch_queue_create("com.blah.queue", DISPATCH_QUEUE_SERIAL);
    NSLog(@"fullHealthKitSyncFromEarliestDate");
    NSDate *latestDate = [PFUser currentUser][@"earliestHKDate"];
    if (!latestDate) latestDate = [NSDate date];
    NSDate *earliestDate = [healthStore earliestPermittedSampleDate];
    NSLog(@"earliest date %@ and latest date %@", earliestDate, latestDate);
    if ([earliestDate earlierDate:[[NSDate date] dateByAddingYears:-2]] == earliestDate) {
        earliestDate = [[NSDate date] dateByAddingYears:-1];
    }
    __block NSDate *laterDate = latestDate;
    __block NSDate *earlierDate = [latestDate dateByAddingMonths:-1];
    int i = 0;
    while ([earlierDate earlierDate:earliestDate] == earliestDate) {
        // DISPATCH_QUEUE_PRIORITY_DEFAULT
        dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(i * 30 * NSEC_PER_SEC)), serialQueue, ^{
            NSLog(@"calling sync from %@ to %@", earlierDate, laterDate);
            [self syncfromDate:earlierDate toDate:laterDate];
            laterDate = [laterDate dateByAddingMonths:-1];
            earlierDate = [laterDate dateByAddingMonths:-1];
            earlierDate = [[earliestDate dateByAddingSeconds:1] laterDate:earlierDate];
        });
    }
}
I am having a few problems with the above code:
earlierDate sometimes does not get updated for the next iteration. It's like there's some weird overlap: sometimes the same date range gets queried twice, or even more times.
Kernel memory allocation issues (as I retrieve objects I queue them for upload to a remote server)
The interface stops responding entirely, even though this is all happening off the main queue. There may be something far downstream that gets called at the end of the entire query, but I'm still surprised the interface basically stops responding altogether. It usually does fine with similar syncing over short time periods, and here I'm just doing a month at a time in queries spaced 30 seconds apart.
I realize that without the details of additional code this may be too murky to address, but can people tell me what sorts of solutions they have implemented for cases where you want to retrieve massive amounts of data from HealthKit? I realize I may get responses like 'that's what HealthKit is for... why are you duplicating all that data?'. Suffice it to say it's required and there's no way around it.
Also, am I just misunderstanding entirely how to use a serial queue?
Ideally I'd like to do the following: (1) run only one query at a time in this loop, (2) run them with some reasonable period of time between them, say 30 seconds apart, because at the end of each query I do need to do some interface updates on the main queue, and (3) do all the queries in the background in such a way that interface performance is not degraded and the work can be killed at any time the user decides to kill the app. That's what I thought I would achieve with the above... if folks can tell me where my threading is going wrong I'd really appreciate it. Thanks!
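For what it's worth, a common pattern for this kind of work is to chain each query from the previous one's completion handler instead of pre-scheduling every block with shared __block dates, which is likely the source of the overlapping ranges. A minimal sketch, assuming a hypothetical -syncFromDate:toDate:completion: variant of the method above (dateByAddingMonths: is the same category method the question already uses):

    - (void)syncMonthEndingAt:(NSDate *)laterDate earliestDate:(NSDate *)earliestDate {
        if ([laterDate compare:earliestDate] != NSOrderedDescending) {
            return; // reached the earliest permitted sample date, so we're done
        }
        NSDate *earlierDate = [laterDate dateByAddingMonths:-1];
        if ([earlierDate compare:earliestDate] == NSOrderedAscending) {
            earlierDate = earliestDate; // clamp the final chunk
        }
        [self syncFromDate:earlierDate toDate:laterDate completion:^{
            // Only after this chunk finishes, schedule the next one 30 seconds later.
            dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(30 * NSEC_PER_SEC)),
                           dispatch_get_global_queue(QOS_CLASS_UTILITY, 0), ^{
                [self syncMonthEndingAt:earlierDate earliestDate:earliestDate];
            });
        }];
    }

Kick it off once with the latestDate and earliestDate computed above. Because each step is scheduled only from its predecessor's completion, at most one query is ever in flight, no __block state is shared across blocks, and the chain simply stops if the app is killed.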

Changing setPreferredIOBufferDuration at Runtime results in Core Audio Error -50

I am writing an Audio Unit (remote IO) based app that displays waveforms at a given buffer size. The app initially starts off with a preferred buffer size of 0.0001, which results in very small buffer frame sizes (I think it's 14 frames). Then at runtime I have a UI element that allows switching buffer frame sizes via AVAudioSession's setPreferredIOBufferDuration:error: method.
Here is the code, where the first two cases change from a smaller to a larger buffer size. Cases 3-5 are not specified yet. But the app crashes at AudioUnitRender with a -50 error code.
- (void)setBufferSizeFromMode:(int)mode {
    NSTimeInterval bufferDuration = 0.0; // initialized so the default case is defined
    switch (mode) {
        case 1:
            bufferDuration = 0.0001;
            break;
        case 2:
            bufferDuration = 0.001;
            break;
        case 3:
            bufferDuration = 0.0; // reserved
            break;
        case 4:
            bufferDuration = 0.0; // reserved
            break;
        case 5:
            bufferDuration = 0.0; // reserved
            break;
        default:
            break;
    }
    AVAudioSession *session = [AVAudioSession sharedInstance];
    NSError *audioSessionError = nil;
    [session setPreferredIOBufferDuration:bufferDuration error:&audioSessionError];
    if (audioSessionError) {
        NSLog(@"Error %ld, %@",
              (long)audioSessionError.code, audioSessionError.localizedDescription);
    }
}
Based on reading the Core Audio and AVFoundation documentation, I was led to believe that you can change the audio hardware configuration at runtime. There may be some gaps or distortion in the audio, but I am fine with that for now. Is there an obvious reason for this crash? Or must I reinitialize everything (my audio session, my audio unit, my audio buffers, etc.) for each change of the buffer duration?
Edit: I have tried calling AudioOutputUnitStop(self.myRemoteIO); before changing the session buffer duration and then starting it again after it is set. I've also tried setting the session to inactive and then reactivating it, but both result in the -50 OSStatus from AudioUnitRender() in my AU input callback.
A -50 error usually means the audio unit code is trying to set or use an invalid parameter value.
Some iOS devices don't support actual buffer durations below 5.3 ms (or 0.0058 seconds on older devices). And iOS devices appear free to switch to an actual buffer duration 4x longer than that, or even to alternate between slightly different values, at times not under the app's control.
The inNumberFrames is given to the audio unit callback as a parameter; your app can't arbitrarily specify that value.
If you want to process given buffer sizes, pull them out of an intermediating lock-free circular FIFO, which the audio unit callback can feed into.
Also: Try waiting a second or so after calling audio stop before changing parameters or restarting. There appears to be a delay between when you call stop, and when the hardware actually stops.
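For example, a minimal sketch of that stop/wait/restart sequence, where self.myRemoteIO is the remote IO unit from the question and the one-second delay is a guess rather than a documented value:

    - (void)applyPreferredBufferDuration:(NSTimeInterval)duration {
        AudioOutputUnitStop(self.myRemoteIO);
        // Give the hardware time to actually stop before reconfiguring.
        dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(1 * NSEC_PER_SEC)),
                       dispatch_get_main_queue(), ^{
            NSError *error = nil;
            [[AVAudioSession sharedInstance] setPreferredIOBufferDuration:duration error:&error];
            if (error) {
                NSLog(@"Buffer duration error: %@", error.localizedDescription);
            }
            AudioOutputUnitStart(self.myRemoteIO);
        });
    }

Even then, treat the resulting duration as a preference: read the session's actual IO buffer duration and the callback's inNumberFrames, and feed your waveform display from an intermediate FIFO if it needs fixed-size chunks.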

NSManagedObjectContext save fails when saving too many items in iOS 7?

I have the following two lines of code that work almost all the time:
NSError *error = nil;
BOOL isSuccessful = [self.tempMoc save:&error]; // tempMoc is an NSManagedObjectContext
This code works as expected on the iOS 6 simulator, iOS 6 physical devices, and the iOS 7 simulator: the variable isSuccessful evaluates to YES.
However, on iOS 7 physical devices, isSuccessful evaluates to NO. Why is that?
error is always nil in all four cases mentioned.
Does anyone know why this is the case and how I can get isSuccessful to evaluate to YES on iOS 7 physical devices?
ADDITIONAL DETAILS
After more debugging I noticed something. Prior to the tempMoc save above, I have this code running:
- (void)saveCompatibilities:(NSArray *)objects {
    NSString *entityName = NSStringFromClass([Compatibility class]);
    for (NSDictionary *newObjectDict in objects) {
        Compatibility *object = [NSEntityDescription insertNewObjectForEntityForName:entityName inManagedObjectContext:self.tempMoc];
        object.prod1 = newObjectDict[@"prod1"]; // value is just the letter a
        object.prod2 = newObjectDict[@"prod2"]; // value is just the letter a
    }
}
I noticed that if the number of iterations in the for loop is very large, say 50,000, then I encounter the iOS 7 isSuccessful == NO issue mentioned above. If it is only, say, 20 iterations, then isSuccessful evaluates to YES. The number of loops that triggers the iOS 7 failure is different with every single run.
I'm starting to think this is a memory issue with my device?
Sounds like a memory issue. Try saving periodically, or on memory pressure. You could also make a child context to save on a different thread.
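For illustration, a minimal sketch of the periodic-save idea applied to the loop above; the batch size of 500 is an arbitrary starting point to tune against memory warnings, and any final partial batch is still covered by the existing save call after this method:

    - (void)saveCompatibilities:(NSArray *)objects {
        NSString *entityName = NSStringFromClass([Compatibility class]);
        NSUInteger batchSize = 500; // arbitrary; tune for your data
        NSUInteger count = 0;
        for (NSDictionary *newObjectDict in objects) {
            @autoreleasepool {
                Compatibility *object =
                    [NSEntityDescription insertNewObjectForEntityForName:entityName
                                                  inManagedObjectContext:self.tempMoc];
                object.prod1 = newObjectDict[@"prod1"];
                object.prod2 = newObjectDict[@"prod2"];
                if (++count % batchSize == 0) {
                    NSError *error = nil;
                    if ([self.tempMoc save:&error]) {
                        [self.tempMoc reset]; // drop saved objects from memory
                    } else {
                        NSLog(@"Batch save failed: %@", error);
                    }
                }
            }
        }
    }

Draining the autorelease pool and resetting the context between batches keeps the number of live managed objects bounded, instead of accumulating 50,000 unsaved inserts.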

Proper usage of CIDetectorTracking

Apple recently added a new constant to the CIDetector class called CIDetectorTracking, which appears to be able to track faces between frames in a video. This would be very beneficial for me if I could figure out how it works.
I've tried adding this key to the detector's options dictionary using every object I can think of that is remotely relevant, including my AVCaptureStillImageOutput instance, the UIImage I'm working on, YES, 1, etc.
NSDictionary *detectorOptions = [[NSDictionary alloc] initWithObjectsAndKeys:CIDetectorAccuracyHigh, CIDetectorAccuracy, myAVCaptureStillImageOutput, CIDetectorTracking, nil];
But no matter what parameter I try to pass, it either crashes (obviously I'm guessing at it here) or the debugger outputs:
Unknown CIDetectorTracking specified. Ignoring.
Normally, I wouldn't be guessing at this, but resources on this topic are virtually nonexistent. Apple's class reference states:
A key used to enable or disable face tracking for the detector. Use
this option when you want to track faces across frames in a video.
Other than availability being iOS 6+ and OS X 10.8+ that's it.
Comments inside CIDetector.h:
/*The key in the options dictionary used to specify that feature
tracking should be used. */
If that wasn't bad enough, a Google search provides 7 results (8 when they find this post) all of which are either Apple class references, API diffs, a SO post asking how to achieve this in iOS 5, or 3rd party copies of the former.
All that being said, any hints or tips for the proper usage of CIDetectorTracking would be greatly appreciated!
You're right, this key is not very well documented. Besides the API docs, it is also not explained in:
the CIDetector.h header file
the Core Image Programming Guide
the WWDC 2012 Session "520 - What's New in Camera Capture"
the sample code to this session (StacheCam 2)
I tried different values for CIDetectorTracking and the only accepted values seem to be @(YES) and @(NO). With other values it prints this message in the console:
Unknown CIDetectorTracking specified. Ignoring.
When you set the value to @(YES) you should get tracking IDs with the detected face features.
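For illustration, a minimal sketch of enabling tracking and reading the IDs back from the detected features, where ciImage stands in for a CIImage built from your current frame:

    NSDictionary *options = @{ CIDetectorAccuracy : CIDetectorAccuracyHigh,
                               CIDetectorTracking : @(YES) };
    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                              context:nil
                                              options:options];
    for (CIFaceFeature *feature in [detector featuresInImage:ciImage]) {
        if (feature.hasTrackingID) {
            // trackingID is an int32_t that stays stable for the same face across frames.
            NSLog(@"Face tracking ID: %d", feature.trackingID);
        }
    }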
However, when you want to detect faces in content captured from the camera, you should prefer the face detection API in AVFoundation. It has face tracking built in, and the detection happens in the background on the GPU, so it will be much faster than Core Image face detection.
It requires iOS 6 and at least an iPhone 4S or iPad 2.
The faces are sent as metadata objects (AVMetadataFaceObject) to the AVCaptureMetadataOutputObjectsDelegate.
You can use this code (taken from StacheCam 2 and the slides of the WWDC session mentioned above) to setup face detection and get face metadata objects:
- (void)setupAVFoundationFaceDetection
{
    self.metadataOutput = [AVCaptureMetadataOutput new];
    if ( ! [self.session canAddOutput:self.metadataOutput] ) {
        return;
    }

    // Metadata processing will be fast, and mostly updating UI which should be done on the main thread
    // So just use the main dispatch queue instead of creating a separate one
    // (compare this to the expensive CoreImage face detection, done on a separate queue)
    [self.metadataOutput setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];
    [self.session addOutput:self.metadataOutput];

    if ( ! [self.metadataOutput.availableMetadataObjectTypes containsObject:AVMetadataObjectTypeFace] ) {
        // face detection isn't supported (via AV Foundation), fall back to CoreImage
        return;
    }

    // We only want faces, if we don't set this we would detect everything available
    // (some objects may be expensive to detect, so best form is to select only what you need)
    self.metadataOutput.metadataObjectTypes = @[ AVMetadataObjectTypeFace ];
}
// AVCaptureMetadataOutputObjectsDelegate
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputMetadataObjects:(NSArray *)metadataObjects
       fromConnection:(AVCaptureConnection *)c
{
    for ( AVMetadataObject *object in metadataObjects ) {
        if ( [[object type] isEqual:AVMetadataObjectTypeFace] ) {
            AVMetadataFaceObject *face = (AVMetadataFaceObject *)object;
            CMTime timestamp = [face time];
            CGRect faceRectangle = [face bounds];
            CGFloat rollAngle = [face rollAngle];
            CGFloat yawAngle = [face yawAngle];
            NSNumber *faceID = @(face.faceID); // use this id for tracking
            // Do interesting things with this face
        }
    }
}
If you want to display the face frames in the preview layer you need to get the transformed face object:
AVMetadataFaceObject *adjusted = (AVMetadataFaceObject *)[self.previewLayer transformedMetadataObjectForMetadataObject:face];
For details check out the sample code from WWDC 2012.

How to use iOS 6 challenges in Game Center

Firstly, I am fairly new to Objective-C / Xcode development, so there is a good chance I am being a muppet. I have written a few simple apps to try things, and my most recent one has been testing the Game Center classes and functionality.
I have linked OK to leaderboards and achievements, but I can't get challenges working.
I have added the following code, which is in my .m:
GKLeaderboard *query = [[GKLeaderboard alloc] init];
query.category = LoadLeaderboard;
query.playerScope = GKLeaderboardPlayerScopeFriendsOnly;
query.range = NSMakeRange(1, 100);
[query loadScoresWithCompletionHandler:^(NSArray *scores, NSError *error) {
    NSPredicate *filter = [NSPredicate predicateWithFormat:@"value < %qi", scoreint];
    NSArray *lesserScores = [scores filteredArrayUsingPredicate:filter];
    [self presentChallengeWithPreselectedScores:lesserScores];
}];
This code is basically taken from Apple, just replacing the variable names. However, it gives an error on
[self presentChallengeWithPreselectedScores: lesserScores];
error Implicit conversion of an Objective-C pointer to 'int64_t *' (aka 'long long *') is disallowed with ARC
LoadLeaderboard is defined as a string.
scoreint is defined as an integer; I thought this might be the issue since it is not an int64_t, but that does not seem to make a difference.
I am sure that for someone who has any kind of a clue this is a straightforward fix, but I am struggling at the moment. So if anyone can be kind and help a fool in need, it would be most appreciated.
Thanks,
Matt
Welcome to Stack Overflow. I don't know your implementation of the presentChallengeWithPreselectedScores: method, so I can't tell for sure (although from the error it looks like the method takes a pointer to a 64-bit integer and you're trying to feed it an array).
There are two ways to issue challenges:
1 - This is the easier way: if you've successfully implemented leaderboards and score posting to Game Center, challenges work out of the box in iOS 6. The user can always view the leaderboard, select a submitted score (or a completed achievement), and select "Challenge Friend".
2 - The second way is to build a friend picker and let the user issue challenges from within your game. Considering you're new to Objective-C and Game Center, it's not so easy, but for your reference here is how you do it:
When you submit a GKScore object for the leaderboards, you can retain and use that GKScore object (call it myScoreObject) like this:
[myScoreObject issueChallengeToPlayers:selectedFriends message:yourMessage];
where selectedFriends is an NSArray (the friend picker should generate this); the message is optional and can be used if you want to send a message to the challenged friends.
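For reference, a minimal sketch of that second flow using the iOS 6 era APIs; "MyLeaderboardID" is a placeholder identifier, and scoreint and selectedFriends come from your own code and friend picker:

    // Report a score, keep the GKScore object, then issue a challenge with it.
    GKScore *myScoreObject = [[GKScore alloc] initWithCategory:@"MyLeaderboardID"];
    myScoreObject.value = scoreint; // the int64_t score value
    [myScoreObject reportScoreWithCompletionHandler:^(NSError *error) {
        if (error) {
            NSLog(@"Score report failed: %@", error);
            return;
        }
        // selectedFriends: player identifiers chosen in your friend picker UI
        [myScoreObject issueChallengeToPlayers:selectedFriends
                                       message:@"Beat my score!"];
    }];

Issuing the challenge from the report's completion handler ensures the score actually exists on Game Center before friends are challenged against it.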