Writing Enhance Function - objective-c

I'm building an "enhance" function for a crime-fighting project I'm working on. The end goal is to be able to do things like in this documentary video: https://www.youtube.com/watch?v=Vxq9yj2pVWk
I have it mostly working but I'm getting hung up on the part where you speak into the computer mic and it just does what you want it to do. Here's my function thus far:
static const NSInteger kLotsX = MAXFLOAT;
static const NSInteger kLotsY = MAXFLOAT;
- (void)letsEnhance:(UIImageView *)imageView {
[imageView setTransform:CGAffineTransformMakeScale(kLotsX, kLotsY)];
// Fill in missing pixels / image-data
}
(Posted April 1st.)

Related

Can't use Sphero collision detection with Swift?

I'm new to Swift and Sphero development but I've been asked to do a game based on collisions with the Sphero.
I've managed to implement the driving part without problems so far, but I'm having problems with collisions.
I've been looking for code examples and similar issues all over the Internet, but everything I've found is based on other languages like Java or Objective-C.
The code provided by Sphero's official page is the following:
**Enable collision detection**
robot.enableCollisions(true)
robot.sendCommand(RKConfigureCollisionDetectionCommand(forMethod: .Method3, xThreshold: 50, xSpeedThreshold: 30, yThreshold: 200, ySpeedThreshold: 0, postTimeDeadZone: 0.2))
**Handle Async Messages on collision**
func handleAsyncMessage(message: RKAsyncMessage!, forRobot robot: RKRobotBase!) {
if let collisionMessage = message as? RKCollisionDetectedAsyncData {
// handleCollisionDetected
}
}
I've tried this in many ways, but when executed it won't send any command or even reach the handleAsyncMessage method, so I'm starting to think this code is not implemented for Swift.
These doubts were intensified when I found that the collision streaming method is documented on the official page for Objective-C, but for Swift I could only find //Coming Soon!.
Collisions
[_robot sendCommand:[[RKConfigureCollisionDetectionCommand alloc]
initForMethod:RKCollisionDetectionMethod3
xThreshold:50 xSpeedThreshold:30 yThreshold:200 ySpeedThreshold:0 postTimeDeadZone:.2]];
...
- (void)handleAsyncMessage:(RKAsyncMessage *)message forRobot:(id<RKRobotBase>)robot {
if( [message isKindOfClass:[RKCollisionDetectedAsyncData class]]) {
RKCollisionDetectedAsyncData *collisionAsyncData = (RKCollisionDetectedAsyncData *) message;
float impactAccelX = [collisionAsyncData impactAcceleration].x;
float impactAccelY = [collisionAsyncData impactAcceleration].y;
float impactAccelZ = [collisionAsyncData impactAcceleration].z;
float impactAxisX = [collisionAsyncData impactAxis].x;
float impactAxisY = [collisionAsyncData impactAxis].y;
float impactPowerX = [collisionAsyncData impactPower].x;
float impactPowerY = [collisionAsyncData impactPower].y;
float impactSpeed = [collisionAsyncData impactSpeed];
}
}
Should I change the language to Objective-C, or do you know any way to implement this using Swift?
Thank you in advance.
This SDK is written in Objective-C; it works from Swift through the Objective-C interoperability built into the language. Everything should work regardless of the language you choose. It looks like you might be missing the response observer: on the robot you call robot.addResponseObserver(self), making sure you implement the RKResponseObserver protocol.
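For example, here is a rough Swift sketch tying those pieces together (the CollisionHandler class name is mine, the robot parameter type is assumed to be the SDK's convenience robot class, and exact signatures may vary between RobotKit versions):
class CollisionHandler: NSObject, RKResponseObserver {

    func setup(robot: RKConvenienceRobot) {
        // Register as an observer first; without this the SDK never calls
        // handleAsyncMessage on this object.
        robot.addResponseObserver(self)

        robot.enableCollisions(true)
        robot.sendCommand(RKConfigureCollisionDetectionCommand(forMethod: .Method3,
            xThreshold: 50, xSpeedThreshold: 30,
            yThreshold: 200, ySpeedThreshold: 0,
            postTimeDeadZone: 0.2))
    }

    func handleAsyncMessage(message: RKAsyncMessage!, forRobot robot: RKRobotBase!) {
        if let collisionMessage = message as? RKCollisionDetectedAsyncData {
            // React to the collision here.
            print(collisionMessage.impactSpeed)
        }
    }
}
The key point is that addResponseObserver(self) has to happen before you can expect any async callbacks.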

Microsoft Kinect and background/environmental noise

I am currently programming with the Microsoft Kinect for Windows SDK 2 on Windows 8.1. Things are going well, and in a home dev environment obviously there is not much noise in the background compared to the 'real world'.
I would like to seek some advice from those with experience in 'real world' applications of the Kinect. How does the Kinect (especially v2) fare in a live environment with passers-by, onlookers and unexpected objects in the background? I expect the space between the Kinect sensor and the user will usually be free of interference; what I am most mindful of right now is noise in the background behind the user.
While I am aware that the Kinect does not track well in direct sunlight (either on the sensor or on the user), are there certain lighting conditions or other external factors I need to account for in the code?
The answers I am looking for are:
What kind of issues can arise in a live environment?
How did you code or work your way around them?
Outlaw Lemur has described in detail most of the issues you may encounter in real-world scenarios.
Using Kinect for Windows version 2, you do not need to adjust the motor, since there is no motor and the sensor has a larger field of view. This will make your life much easier.
I would like to add the following tips and advice:
1) Avoid direct light (physical or internal lighting)
Kinect has an infrared sensor that can be confused by strong light; it should not be exposed directly to any light source. You can emulate such an environment at your home or office by playing with an ordinary laser pointer and torches.
2) If you are tracking only one person, select the closest tracked user
If your app only needs one player, that player needs to be a) fully tracked and b) closer to the sensor than the others. It's an easy way to make participants understand who is tracked without making your UI more complex.
public static Body Default(this IEnumerable<Body> bodies)
{
Body result = null;
double closestBodyDistance = double.MaxValue;
foreach (var body in bodies)
{
if (body.IsTracked)
{
var position = body.Joints[JointType.SpineBase].Position;
var distance = position.Length();
if (result == null || distance < closestBodyDistance)
{
result = body;
closestBodyDistance = distance;
}
}
}
return result;
}
3) Use the tracking IDs to distinguish different players
Each player has a TrackingID property. Use that property when players interfere or move at random positions. Do not use that property as an alternative to face recognition though.
ulong _trackingID1 = 0;
ulong _trackingID2 = 0;
void BodyReader_FrameArrived(object sender, BodyFrameArrivedEventArgs e)
{
    using (var frame = e.FrameReference.AcquireFrame())
    {
        if (frame != null)
        {
            frame.GetAndRefreshBodyData(_bodies);
            var bodies = _bodies.Where(b => b.IsTracked).ToList();
            if (bodies.Count >= 2 && _trackingID1 == 0 && _trackingID2 == 0)
            {
                _trackingID1 = bodies[0].TrackingId;
                _trackingID2 = bodies[1].TrackingId;
                // Alternatively, assign body1 and body2 according to their distance from the sensor.
            }
            Body first = bodies.Where(b => b.TrackingId == _trackingID1).FirstOrDefault();
            Body second = bodies.Where(b => b.TrackingId == _trackingID2).FirstOrDefault();
            if (first != null)
            {
                // Do something...
            }
            if (second != null)
            {
                // Do something...
            }
        }
    }
}
4) Display warnings when a player is too far or too close to the sensor.
To achieve higher accuracy, players need to stand at a specific distance: not too far or too close to the sensor. Here's how to check this:
const double MIN_DISTANCE = 1.0; // in meters
const double MAX_DISTANCE = 4.0; // in meters
double distance = body.Joints[JointType.SpineBase].Position.Z; // in meters, too
if (distance > MAX_DISTANCE)
{
// Prompt the player to move closer.
}
else if (distance < MIN_DISTANCE)
{
// Prompt the player to move farther.
}
else
{
// Player is at the right distance.
}
5) Always know when a player entered or left the scene.
Vitruvius provides an easy way to understand when someone entered or left the scene.
Here is the source code and here is how to use it in your app:
UsersController userReporter = new UsersController();
userReporter.BodyEntered += UserReporter_BodyEntered;
userReporter.BodyLeft += UserReporter_BodyLeft;
userReporter.Start();
void UserReporter_BodyEntered(object sender, UsersControllerEventArgs e)
{
// A new user has entered the scene. Get the ID from e param.
}
void UserReporter_BodyLeft(object sender, UsersControllerEventArgs e)
{
// A user has left the scene. Get the ID from e param.
}
6) Have a visual cue for which player is tracked
If there are a lot of people surrounding the player, you may need to show on-screen who is tracked. You can highlight the depth frame bitmap or use Microsoft's Kinect Interactions.
This is an example of removing the background and keeping the player pixels only.
7) Avoid glossy floors
Some floors (bright, glossy) may mirror people and Kinect may confuse some of their joints (for example, Kinect may extend your legs to the reflected body). If you can't avoid glossy floors, use the FloorClipPlane property of your BodyFrame. However, the best solution would be to have a simple carpet where you expect people to stand. A carpet would also act as an indication of the proper distance, so you would provide a better user experience.
I created an application in a home environment like yours, and then presented that same application in a public setting. The result was embarrassing for me, because there were many errors I would never have anticipated in a controlled environment. However, that experience did help me, because it led me to add some interesting adjustments to my code, which centers on human detection only.
Have conditions for checking the validity of a "human".
When I showed my application in the middle of a presentation floor with many other objects and props, I found that even chairs could be mistaken for people for brief moments, which led to my application switching between the user and an inanimate object, causing it to lose track of the user and their progress. To counter this and other false-positive human detections, I added my own additional checks for a human. My most successful method was comparing the proportions of a human's body, measured in head units. (head units picture) Below is the code for how I did this (SDK version 1.8, C#):
bool PersonDetected = false;
double[] humanRatios = { 1.0, 4.0, 2.33, 3.0 };
/*Array indexes
 * 0 - Head (shoulder to head)
 * 1 - Leg length (foot to knee to hip)
 * 2 - Width (shoulder to shoulder center to shoulder)
 * 3 - Torso (hips to shoulder)
 */
....
const double MaximumDiff = 0.2; // maximum allowed deviation from the expected ratio
double[] currentRatios = new double[4];
double headSize = Distance(skeletons[0].Joints[JointType.ShoulderCenter], skeletons[0].Joints[JointType.Head]);
currentRatios[0] = 1.0;
currentRatios[1] = (Distance(skeletons[0].Joints[JointType.FootLeft], skeletons[0].Joints[JointType.KneeLeft]) + Distance(skeletons[0].Joints[JointType.KneeLeft], skeletons[0].Joints[JointType.HipLeft])) / headSize;
currentRatios[2] = (Distance(skeletons[0].Joints[JointType.ShoulderLeft], skeletons[0].Joints[JointType.ShoulderCenter]) + Distance(skeletons[0].Joints[JointType.ShoulderCenter], skeletons[0].Joints[JointType.ShoulderRight])) / headSize;
currentRatios[3] = Distance(skeletons[0].Joints[JointType.HipCenter], skeletons[0].Joints[JointType.ShoulderCenter]) / headSize;
int correctProportions = 0;
for (int i = 1; i < currentRatios.Length; i++)
{
    double diff = currentRatios[i] - humanRatios[i];
    if (Math.Abs(diff) <= MaximumDiff)
        correctProportions++;
}
if (correctProportions >= 2)
    PersonDetected = true;
Another method I had success with was summing the squared distances between consecutive joints. I found that non-human detections had more variable summed distances, whereas humans are more consistent. I learned the threshold using a one-dimensional support vector machine (I found that users' summed distances were generally less than 9).
//in AllFramesReady or SkeletalFrameReady
Skeleton data;
...
float lastPosX = 0; // trying to detect false-positives
float lastPosY = 0;
float lastPosZ = 0;
float diff = 0;
foreach (Joint joint in data.Joints)
{
//add the distance squared
diff += (joint.Position.X - lastPosX) * (joint.Position.X - lastPosX);
diff += (joint.Position.Y - lastPosY) * (joint.Position.Y - lastPosY);
diff += (joint.Position.Z - lastPosZ) * (joint.Position.Z - lastPosZ);
lastPosX = joint.Position.X;
lastPosY = joint.Position.Y;
lastPosZ = joint.Position.Z;
}
if (diff < 9)//this is what my svm learned
PersonDetected = true;
Use player IDs and indexes to remember who is who
This ties in with the previous issue: if Kinect switched from the two users it was tracking to other people, my application would crash because of the sudden changes in the data. To counter this, I kept track of each player's skeletal index and their player ID. To learn more about how I did this, see Kinect user Detection.
Add adjustable parameters to adapt to varying situations
Where I was presenting, the same tilt angle and other basic Kinect parameters (like near mode) did not work in the new environment. Let the user adjust some of these parameters so they can get the best setup for the job.
Expect people to do stupid things
The next time I presented, I had adjustable tilt, and, you guessed it, someone burned out the Kinect's motor. Anything that can be broken on a Kinect, someone will break. Leaving a warning in your documentation will not be sufficient. You should add cautionary checks on the Kinect's hardware to make sure people don't get upset when they inadvertently break something. Here is some code that checks whether the user has adjusted the motor more than 20 times in two minutes.
int motorAdjustments = 0;
DateTime firstAdjustment;
...
//in motor adjustment code
if (motorAdjustments == 0)
firstAdjustment = DateTime.Now;
++motorAdjustments;
if (motorAdjustments < 20)
{
//adjust the tilt
}
else
{
DateTime timeCheck = firstAdjustment;
if (DateTime.Now > timeCheck.AddMinutes(2))
{
//reset all variables
motorAdjustments = 1;
firstAdjustment = DateTime.Now;
//adjust the tilt
}
}
I would note that all of these were issues for me with the first version of Kinect, and I don't know how many of them have been solved in the second version, as sadly I haven't gotten my hands on one yet. However, I would still implement some of these techniques, if only as backup techniques, because there will be exceptions, especially in computer vision.

Adjusting screen contrast programmatically in Cocoa application

I'm trying to adjust the screen contrast in Objective-C for a Cocoa application using kIODisplayContrastKey. I saw a post about adjusting screen brightness here:
Programmatically change Mac display brightness
- (void) setBrightnessTo: (float) level
{
io_iterator_t iterator;
kern_return_t result = IOServiceGetMatchingServices(kIOMasterPortDefault,
IOServiceMatching("IODisplayConnect"),
&iterator);
// If we were successful
if (result == kIOReturnSuccess)
{
io_object_t service;
while ((service = IOIteratorNext(iterator)))
{
IODisplaySetFloatParameter(service, kNilOptions, CFSTR(kIODisplayBrightnessKey), level);
// Let the object go
IOObjectRelease(service);
return;
}
}
}
Code by Alex King from the link above.
And that code worked. So I tried to do the same for contrast by using a different key (kIODisplayContrastKey), but that doesn't seem to work. Does anybody have an idea if that's possible?
I'm using OS X 10.9.3.

Pre-processing a loop in Objective-C

I am currently writing a program to help me control complex lighting installations. The idea is that I tell the program to start a preset, and then the app has three options (depending on the preset type):
1) the lights go to one position (so only one group of data is sent when the preset starts)
2) the lights follow a mathematical equation (e.g. a sine function with a timer to make smooth circles)
3) the lights respond to a flow of data (e.g. a MIDI controller)
So I decided to go with an object I call the AppBrain, which receives data from the controllers and the templates, and is also able to send processed data to the lights.
Now, I come from non-native programming, and I'm somewhat wary of working with a lot of processing, events and timing, and I don't yet fully understand the Cocoa way of doing things.
This is where the actual question starts, sorry
What I want to do is, when I load the preset, parse it to prepare the timer/data-receive handler so it doesn't have to go through every option for 100 lights 100 times per second.
To explain more deeply, here's how I would do it in JavaScript (crappy pseudocode, of course):
var lightsFunctions = {};
function prepareTemplate(theTemplate){
//Let's assume here the template is just an array, and I won't show all the processing
switch(theTemplate.typeOfTemplate){
case "simpledata":
sendAllDataTooLights(); // Simple here
break;
case "periodic":
for(light in theTemplate.lights){
switch(light.typeOfEquation){
case "sin":
lightsFunctions[light.id] = doTheSinus; // doTheSinus being an existing function
break;
case "cos":
...
}
}
function onFrame(){
for(light in lightsFunctions){
lightsFunctions[light]();
}
}
var theTimer = setTimeout(onFrame, theTemplate.delay);
break;
case "controller":
//do the same pre-processing without the timer, to know which function to execute for which light
break;
}
}
So, my idea is to store the processing functions I need in an NSArray, so I don't need to test the type on each frame and lose time/CPU.
I don't know if I'm clear, or if my idea is possible/the right way to go. I'm mostly looking for algorithm ideas, and if you have some code that might point me in the right direction... (I know of performSelector, but I don't know if it is the best fit for this situation.)
Thanks;
I_
First of all, don't spend time optimizing what you don't know is a performance problem. One hundred iterations of this kind is nothing in the native world, even on the weaker mobile CPUs.
Now, to your problem. I take it you are writing some kind of configuration / DSL to specify the light control sequences. One way of doing it is to store blocks in your NSArray. A block is the equivalent of a function object in JavaScript. So for example:
typedef void (^LightFunction)(void);
- (NSArray*) parseProgram ... {
NSMutableArray* result = [NSMutableArray array];
if(...) {
LightFunction simpleData = ^{ sendDataToLights(); };
[result addObject:simpleData];
} else if(...) {
Light* light = [self getSomeLight:...];
LightFunction periodic = ^{
// Note how you can access the local scope of the outside function.
// Make sure you use automatic reference counting for this.
[light doSomethingWithParam:someParam];
};
[result addObject:periodic];
}
return result;
}
...
NSArray* program = [self parseProgram:...];
// To run your program
for(LightFunction func in program) {
func();
}

Best way to know if application is inactive in cocoa mac OSX?

So, I am building a program that will stand at an exhibition for public use, and I've been asked to add an inactive state to it: just display some random videos from a folder on the screen, like a screensaver, but inside the application.
So what is the best and proper way of checking whether the user is inactive?
What I am thinking about is some kind of global timer that gets reset on every user input; if it reaches, let's say, one minute, the app goes into inactive mode. Are there any better ways?
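For illustration, here is a minimal Swift sketch of that timer-reset idea (IdleWatcher is just a name I made up, and a local event monitor only sees events delivered to my own app, which is part of why I'm asking whether there's a better way):
import AppKit

final class IdleWatcher {
    private var lastActivity = Date()
    private var timer: Timer?
    private var monitor: Any?

    func start(timeout: TimeInterval = 60, onIdle: @escaping () -> Void) {
        // Reset the clock on any event delivered to this application.
        monitor = NSEvent.addLocalMonitorForEvents(matching: .any) { [weak self] event in
            self?.lastActivity = Date()
            return event
        }
        // Check once per second whether the timeout has elapsed.
        timer = Timer.scheduledTimer(withTimeInterval: 1, repeats: true) { [weak self] _ in
            guard let self = self else { return }
            if Date().timeIntervalSince(self.lastActivity) >= timeout {
                onIdle()
                self.lastActivity = Date() // avoid firing on every tick
            }
        }
    }
}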
You can use CGEventSourceSecondsSinceLastEventType
Returns the elapsed time since the last event for a Quartz event source.
/*
To get the elapsed time since the previous input event—keyboard, mouse, or tablet—specify kCGAnyInputEventType.
*/
- (CFTimeInterval)systemIdleTime
{
CFTimeInterval timeSinceLastEvent = CGEventSourceSecondsSinceLastEventType(kCGEventSourceStateHIDSystemState, kCGAnyInputEventType);
return timeSinceLastEvent;
}
I'm expanding on Parag Bafna's answer. In Qt you can do
#include <ApplicationServices/ApplicationServices.h>
double MyClass::getIdleTime() {
CFTimeInterval timeSinceLastEvent = CGEventSourceSecondsSinceLastEventType(kCGEventSourceStateHIDSystemState, kCGAnyInputEventType);
return timeSinceLastEvent;
}
You also have to add the framework to your .pro file:
QMAKE_LFLAGS += -F/System/Library/Frameworks/ApplicationServices.framework
LIBS += -framework ApplicationServices
The documentation of the function is here
I've found a solution that uses the HID manager; this seems to be the way to do it in Cocoa. (There's another solution for Carbon, but it doesn't work on 64-bit OS X.)
Citing Daniel Reese on the Dan and Cheryl's Place blog:
#include <IOKit/IOKitLib.h>
/*
Returns the number of seconds the machine has been idle or -1 on error.
The code is compatible with Tiger/10.4 and later (but not iOS).
*/
int64_t SystemIdleTime(void) {
int64_t idlesecs = -1;
io_iterator_t iter = 0;
if (IOServiceGetMatchingServices(kIOMasterPortDefault,
IOServiceMatching("IOHIDSystem"),
&iter) == KERN_SUCCESS)
{
io_registry_entry_t entry = IOIteratorNext(iter);
if (entry) {
CFMutableDictionaryRef dict = NULL;
kern_return_t status;
status = IORegistryEntryCreateCFProperties(entry,
&dict,
kCFAllocatorDefault, 0);
if (status == KERN_SUCCESS)
{
CFNumberRef obj = CFDictionaryGetValue(dict,
CFSTR("HIDIdleTime"));
if (obj) {
int64_t nanoseconds = 0;
if (CFNumberGetValue(obj,
kCFNumberSInt64Type,
&nanoseconds))
{
// Convert from nanoseconds to seconds.
idlesecs = (nanoseconds >> 30);
}
}
CFRelease(dict);
}
IOObjectRelease(entry);
}
IOObjectRelease(iter);
}
return idlesecs;
}
The code has been slightly modified to fit Stack Overflow's 80-character line limit.
This might sound like a silly suggestion, but why not just set up a screensaver with a short fuse?
You can listen for the NSNotification named @"com.apple.screensaver.didstart" if you need to do any resets or cleanups when the user wanders away.
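For example, a minimal Swift sketch of observing that notification (the handler body is just a placeholder):
import Foundation

// Observe the screensaver-start notification on the distributed notification center.
let center = DistributedNotificationCenter.default()
center.addObserver(forName: Notification.Name("com.apple.screensaver.didstart"),
                   object: nil,
                   queue: .main) { _ in
    // The screensaver just kicked in: pause playback, reset state, etc.
}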
Edit: You could also set up the screen saver; wait for it to fire, and then do your own thing when it starts, stopping the screen saver when you display your own videos; but setting up a screen saver the proper way is probably a good idea.
Take a look at UKIdleTimer, maybe it's what you're looking for.