Detecting motion gestures with the Siri Remote on tvOS - objective-c

I am developing an app for tvOS and I want an event to be triggered when the user shakes the remote or moves it in a downward slash. But Apple's documentation mostly focuses on registering button presses and the focus engine.
Can anyone help me with how I can access the accelerometer?
Thank you for your help

To use the motion sensing aspects of the Siri Remote, you need to treat it as a game controller. See Working with Game Controllers in App Programming Guide for tvOS and the GCMotion class.
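Roughly, that looks like this in Objective-C. This is only a minimal sketch: the -remoteWasShaken handler is hypothetical and the 2g threshold is an arbitrary value chosen for illustration, not something from Apple's docs.

#import <GameController/GameController.h>

- (void)startWatchingForRemoteMotion
{
    // Handle remotes that connect later...
    [[NSNotificationCenter defaultCenter] addObserverForName:GCControllerDidConnectNotification
                                                      object:nil
                                                       queue:[NSOperationQueue mainQueue]
                                                  usingBlock:^(NSNotification *note) {
        [self attachMotionHandler:note.object];
    }];
    // ...and any that are already connected when the app launches.
    for (GCController *controller in [GCController controllers]) {
        [self attachMotionHandler:controller];
    }
}

- (void)attachMotionHandler:(GCController *)controller
{
    if (controller.motion == nil) {
        return; // not a motion-capable device
    }
    controller.motion.valueChangedHandler = ^(GCMotion *motion) {
        // userAcceleration excludes gravity; a large magnitude suggests a
        // shake or a fast downward "slash".
        GCAcceleration a = motion.userAcceleration;
        double magnitudeSquared = a.x * a.x + a.y * a.y + a.z * a.z;
        if (magnitudeSquared > 4.0) {   // (2g)^2, arbitrary threshold
            [self remoteWasShaken];     // hypothetical handler
        }
    };
}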

While it's fairly easy to port an iOS game to tvOS, note the following limitation, which slowed me down: I was originally using the rotation feature and expected it to behave the same on the remote. I had overlooked it in the documentation, but it says: "Although the remote supports motion data (and the GCMotion profile), the remote cannot determine the attitude or rotation of the remote. The corresponding properties always return constant values."
And the constant values as per the tvOS header GCMotion.h are:
@note Remotes can not determine a stable rotation rate so the values will be (0,0,0) at all times.
@note Remotes can not determine a stable attitude so the values will be (0,0,0,1) at all times.

Related

Is it possible to set Sony A7S focus distance from a software remote?

I want to be able to manually set the focus distance on the camera by using a software remote of some kind. I will know a distance to an object I want to focus on but cannot use auto focus because objects will be passing in front of the camera.
Here are two possible remotes I have looked at.
1) Sony Remote Camera Control
http://esupport.sony.com/US/p/swu-download.pl?mdl=ILCE7&upd_id=9294&os_group_id=6
2) Camera Remote API
https://developer.sony.com/develop/cameras/
Neither provides a way to set the focus distance that I can see. Is this even possible, or is it just not documented anywhere?
That is not possible using the Camera Remote API.

Detecting what part of screen/application window is using OpenGL?

I am interested in identifying which part of an application is making use of OpenGL.
Take the example of Chrome, where a YouTube video being played in Flash gets rendered via OpenGL. I am interested in detecting only the area of the application where that OpenGL activity is happening.
If the condition is that I need to be inside the application, say injecting into Chrome, I can do that too.
Let me know if I can clarify the question further.
You tagged your question as Mac OS X. In that case you can simply assume everything on screen is drawn using OpenGL, because OpenGL is used as the graphics backend for the whole system.
There is a private API which allows you to know the surface on which OpenGL is rendering.
CG_EXTERN CGError CGSGetSurfaceBounds(CGSConnectionID, CGWindowID, CGSSurfaceID, CGRect* bounds);
Using this we can detect the specific area of the application which makes use of OpenGL.
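For illustration only, this is roughly how that call could be exercised. All of the CGS declarations below are private and undocumented, so the typedefs and the connection function are assumptions taken from reverse-engineered headers and can break on any OS update. The window ID can come from the public CGWindowListCopyWindowInfo API; the surface ID would have to come from another private CGS call.

#import <Cocoa/Cocoa.h>

// Private CoreGraphics Services (CGS) declarations; signatures assumed.
typedef int CGSConnectionID;
typedef int CGSSurfaceID;
extern CGSConnectionID CGSMainConnectionID(void);
extern CGError CGSGetSurfaceBounds(CGSConnectionID cid, CGWindowID wid,
                                   CGSSurfaceID sid, CGRect *bounds);

static void LogOpenGLSurfaceBounds(CGWindowID windowID, CGSSurfaceID surfaceID)
{
    CGRect bounds = CGRectZero;
    CGError err = CGSGetSurfaceBounds(CGSMainConnectionID(), windowID, surfaceID, &bounds);
    if (err == kCGErrorSuccess) {
        // 'bounds' is the region of the window backed by an OpenGL surface.
        NSLog(@"OpenGL surface rect: %@", NSStringFromRect(NSRectFromCGRect(bounds)));
    }
}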

AVAssetWriter fails when Application enters Background during Rendering

In my app I am rendering a video generated from images I retrieve from the user's photos. I have set up an AVAssetWriter with an AVAssetWriterInput that has an AVAssetWriterInputPixelBufferAdaptor. I'm able to transform the ALAsset objects I retrieve from the user's library to CVPixelBuffers and add them to the video, which is then saved as an mp4. Adding all the images to the video is done on a background thread which sends a notification to the main thread every frame, so the interface can be updated. All this works well, and I get a usable movie file out of the app.
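Roughly, the setup being described looks like this. It is only a sketch: the codec settings, dimensions, and the writer/input/adaptor properties are assumed placeholders, not the actual code.

#import <AVFoundation/AVFoundation.h>

- (void)setUpWriterWithURL:(NSURL *)outputURL // assumed .mp4 destination
{
    NSError *error = nil;
    self.writer = [AVAssetWriter assetWriterWithURL:outputURL
                                           fileType:AVFileTypeMPEG4
                                              error:&error];

    NSDictionary *videoSettings = @{ AVVideoCodecKey  : AVVideoCodecH264,
                                     AVVideoWidthKey  : @640,
                                     AVVideoHeightKey : @480 };
    self.input = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                    outputSettings:videoSettings];
    self.adaptor = [AVAssetWriterInputPixelBufferAdaptor
                       assetWriterInputPixelBufferAdaptorWithAssetWriterInput:self.input
                                                  sourcePixelBufferAttributes:nil];
    [self.writer addInput:self.input];
    [self.writer startWriting];
    [self.writer startSessionAtSourceTime:kCMTimeZero];
}

// Called on the rendering thread for each image converted to a CVPixelBufferRef;
// frameTime advances by 1/fps per appended frame.
- (void)appendPixelBuffer:(CVPixelBufferRef)pixelBuffer atTime:(CMTime)frameTime
{
    if (self.input.readyForMoreMediaData) {
        [self.adaptor appendPixelBuffer:pixelBuffer withPresentationTime:frameTime];
    }
}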
My problem now is that when the user enters another application, after becoming active again the status of the AVAssetWriter changes to "failed", and I am not able to add any more images to the movie file. First I thought I might have to end the current session on the writer and reopen a new one once the app has become active again, but that doesn't seem to help.
I was just wondering what the general approach would be when I'd like the user to be able to enter other applications. The best solution would be if the rendering could continue in the background. I suppose I'd need to request background time from UIApplication. But for now I'd be happy if rendering could just continue after resuming my app.
I won't post my full code for now, because it's really a lot, and my question is possibly conceptual. If you need to see code, I'll post it.
Edit 1:
Tested on iOS 4.3 and iOS 5. I've seen background rendering on other apps such as iTimelapse, but I'm not sure which frameworks they use.
Edit2:
I now have information from an Apple devforum member that AVAssetWriter does not work in the background. So is there any other framework out there capable of rendering QuickTime videos?
Turns out that AVAssetWriter just won't survive the app being suspended. You can add an extra 10 minutes of render time by requesting background time, but after that the asset writer fails. The same happens if you use certain services on the phone: for example, making or answering a call will make the AVAssetWriter fail as well.
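The extra background time comes from a standard background task request. A minimal sketch, assuming a backgroundTask property of type UIBackgroundTaskIdentifier:

#import <UIKit/UIKit.h>

- (void)applicationDidEnterBackground:(UIApplication *)application
{
    // Ask for extra execution time so in-flight writing can finish.
    self.backgroundTask = [application beginBackgroundTaskWithExpirationHandler:^{
        // Time is up: finish or abandon the writer here, then end the task.
        [application endBackgroundTask:self.backgroundTask];
        self.backgroundTask = UIBackgroundTaskInvalid;
    }];
}

- (void)renderingFinished
{
    if (self.backgroundTask != UIBackgroundTaskInvalid) {
        [[UIApplication sharedApplication] endBackgroundTask:self.backgroundTask];
        self.backgroundTask = UIBackgroundTaskInvalid;
    }
}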
If any OpenGL calls are made while your app is in the background, then that would explain this behaviour; it looks fairly likely. From the OpenGL ES Programming Guide:
Background Applications May Not Execute Commands on the Graphics Hardware
An OpenGL ES application is terminated if it attempts to execute OpenGL ES commands on the graphics hardware. This not only refers to calls made to OpenGL ES while your application is in the background, but also refers to previously submitted commands that have not yet completed. The main reason for preventing background applications from processing OpenGL ES commands is to make the graphics processor completely available to the frontmost application. The frontmost application should always present a great experience to the user. Allowing background applications to hog the graphics processor might prevent that. Your application must ensure that all previously submitted commands have been finished prior to moving into the background.
The docs go on to enumerate a set of guidelines for the enter-background/foreground app delegate callbacks. I think finding a way to do the rendering without the graphics hardware would be tricky. Also, the frameworks that allow mp4 encoding (like FFmpeg) are mainly GPL/LGPL, so you need to be careful if dealing with a commercial product (LGPL means you can link to the library dynamically, not statically, which is useless on iOS), as the license would propagate to your code.

Extending Functionality of Magic Mouse: Do I Need a kext?

I recently purchased a Magic Mouse. It is fantastic and full of potential. Unfortunately, it is seriously hindered by the software support. I want to fix that. I have done quite a lot of research and these are my findings regarding the event chain thus far:
The Magic Mouse sends full multitouch events to the system.
Multitouch events are processed in the MultitouchSupport.framework (Carbon)
The events are interpreted in the framework and sent up to the system as normal events
When you scroll with one finger it sends actual scroll wheel events.
When you swipe with two fingers it sends a swipe event.
No NSTouch events are sent up to the system. You cannot use the NSTouch API to interact with the mouse.
After I discovered all of the above, I disassembled the MultitouchSupport.framework file and, with some googling, figured out how to insert a callback of my own into the chain so I would receive the raw touch event data. If you enumerate the list of devices, you can attach to each device (trackpad and mouse). This finding would enable us to create a framework for using multitouch on the mouse, but only in a single application. See my post here: Raw Multitouch Tracking.
I want to add new functionality to the mouse across the entire system, not just a single app.
In an attempt to do so, I figured out how to use Event Taps to see if the lowest level event tap would allow me to get the raw data, interpret it, and send up my own events in its place. Unfortunately, this is not the case. The event tap, even at the HID level, is still a step above where the input is being interpreted in MultitouchSupport.framework.
See my event tap attempt here: Event Tap - Attempt Raw Multitouch.
An interesting side note: when a multitouch event is received, such as a swipe, the default case is hit and prints out an event number of 29. The header shows 28 as being the max.
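For reference, the HID-level tap described above looks roughly like this (the callback body here is just a placeholder; as noted, even this tap only ever sees the already-interpreted events, never the raw touches):

#import <ApplicationServices/ApplicationServices.h>

// Callback invoked for every event passing through the tap.
static CGEventRef TapCallback(CGEventTapProxy proxy, CGEventType type,
                              CGEventRef event, void *refcon)
{
    // Scrolls/swipes arrive here already interpreted by
    // MultitouchSupport.framework; raw touch data is not available.
    return event; // pass the event through unmodified
}

int main(void)
{
    CFMachPortRef tap = CGEventTapCreate(kCGHIDEventTap,
                                         kCGHeadInsertEventTap,
                                         kCGEventTapOptionDefault,
                                         CGEventMaskBit(kCGEventScrollWheel),
                                         TapCallback,
                                         NULL);
    if (tap == NULL) {
        return 1; // requires assistive-access/root privileges
    }
    CFRunLoopSourceRef source = CFMachPortCreateRunLoopSource(kCFAllocatorDefault, tap, 0);
    CFRunLoopAddSource(CFRunLoopGetCurrent(), source, kCFRunLoopCommonModes);
    CGEventTapEnable(tap, true);
    CFRunLoopRun();
    return 0;
}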
On to my question, now that you have all the information and have seen what I have tried: what would be the best approach to extending the functionality of the Magic Mouse? I know I need to insert something at a low enough level to get the input before it is processed and predefined events are dispatched. So, to boil it down to single sentence questions:
Is there some way to override the default callbacks used in MultitouchSupport.framework?
Do I need to write a kext and handle all the incoming data myself?
Is it possible to write a kext that sits on top of the kext that is handling the input now, and filters it after that kext has done all the hard work?
My first goal is to be able to dispatch a middle button click event if there are two fingers on the device when you click. Obviously there is far, far more that could be done, but this seems like a good thing to shoot for, for now.
Thanks in advance!
-Sastira
How does what is happening in MultitouchSupport.framework differ between the Magic Mouse and a glass trackpad? If it is based on IOKit device properties, I suspect you will need a kext that emulates a trackpad but actually communicates with the mouse. Apple has some documentation on Darwin kernel programming and kernel extensions specifically:
About Kernel Extensions
Introduction to I/O Kit Device Driver Design Guidelines
Kernel Programming Guide
(Personally, I'd love something that enabled pinch magnification and more swipe/button gestures; as it is, the Magic Mouse is a functional downgrade from the Mighty Mouse's four buttons and [albeit ever-clogging] 2D scroll wheel. Update: last year I wrote Sesamouse to do just that, and it does NOT need a kext (just a week or two staring at hex dumps :-) See my other answer for the deets and source code.)
Sorry I forgot to update this answer, but I ended up figuring out how to inject multitouch and gesture events into the system from userland via Quartz Event Services. I'm not sure how well it survived the Lion update, but you can check out the underlying source code at https://github.com/calftrail/Touch
It requires two hacks: using the private Multitouch framework to get the device input, and injecting undocumented CGEvent structures into Quartz Event Services. It was incredibly fun to figure out how to pull it off, but these days I recommend just buying a Magic Trackpad :-P
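The undocumented multitouch structures are beyond a quick example, but the basic Quartz Event Services injection mechanism is the same as for any synthetic event: build a CGEvent, then post it. A minimal sketch with an ordinary scroll event:

#import <ApplicationServices/ApplicationServices.h>

// Post a synthetic scroll-wheel event into the system event stream.
// The Touch project above builds its (undocumented) gesture events the
// same basic way: create a CGEvent, then CGEventPost it.
static void PostSyntheticScroll(void)
{
    CGEventRef scroll = CGEventCreateScrollWheelEvent(NULL,
                                                      kCGScrollEventUnitLine,
                                                      1,   // one scroll axis
                                                      5);  // five lines "up"
    CGEventPost(kCGHIDEventTap, scroll);
    CFRelease(scroll);
}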
I've implemented a proof-of-concept of userspace customizable multi-touch events wrapper.
You can read about it here: http://aladino.dmi.unict.it/?a=multitouch (see it in the Wayback Machine)
--
all the best
If you get to that point, you may want to consider making the middle click three fingers on the mouse instead of two. I've thought about this middle-click issue with the Magic Mouse and I notice that I often leave my second finger on the mouse even though I am only pressing for a left click. So a "2 finger" click might be mistaken for a single left click, and it would also require more effort from the user in always having to keep the second finger off the mouse. Therefore, if it's possible to detect, three fingers would cause less confusion and headaches. I wonder where the first "middle button click" solution will come from, as I am anxious for my middle-click Exposé feature to return :) Best of luck.

Apple Magic Mouse Api

I just bought a Magic Mouse and I like it pretty much. But as a Mac developer it's even cooler. There's one problem, though: is there already an API available for it? I want to use it for one of my applications, for example to detect the user's finger positions, swipe or stretch gestures, etc.
Does anyone know if there's an API for it (and how to use it)?
The Magic Mouse does not use the NSTouch API. I have been experimenting with it and attempting to capture touch information. I've had no luck so far. The only touch method that is common to both the mouse and the trackpad is the swipeWithEvent: method. It is called for a two finger swipe on the device only.
It seems the touch input from the mouse is being interpreted somewhere else, then forwarded on to the public API. I have yet to find the private API that is actually doing the work.
Take a look here: http://www.iphonesmartapps.org/aladino/?a=multitouch
there's a full working proof-of-concept using the CGEventPost method.
--
all the best!
I have not tested, but I would be shocked if it didn't use NSTouch. NSTouch is the API you use to interact with the multi-touch trackpads on current MacBook Pros (and the new MacBooks that came out this week). You can check out the LightTable sample project to see how it is used.
It is part of AppKit, but it is a Snow Leopard only API.
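For reference, consuming NSTouch events from a view looks roughly like this; a minimal sketch of the AppKit API mentioned above, shown for hardware that does deliver touches:

#import <Cocoa/Cocoa.h>

@interface TouchView : NSView
@end

@implementation TouchView

- (instancetype)initWithFrame:(NSRect)frame
{
    if ((self = [super initWithFrame:frame])) {
        // Opt in to receiving NSTouch events (10.6+).
        [self setAcceptsTouchEvents:YES];
    }
    return self;
}

- (void)touchesBeganWithEvent:(NSEvent *)event
{
    NSSet *touches = [event touchesMatchingPhase:NSTouchPhaseBegan inView:self];
    for (NSTouch *touch in touches) {
        // normalizedPosition is in [0,1] x [0,1] across the device surface.
        NSLog(@"Touch began at %@", NSStringFromPoint(touch.normalizedPosition));
    }
}

@end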
I messed around with the app below before getting my Magic Mouse. I was surprised to find that the app also tracked the multi-touch points on the mouse.
There is a link in the comments to some source that gets the raw data similarly, but there is no source for this actual app.
http://lericson.blogg.se/code/2009/november/multitouch-on-unibody-macbooks.html