How to find out the type of manipulation in a Windows Store app - XAML

I'm handling the ManipulationCompleted event in my control in a Windows RT application.
But ManipulationCompletedRoutedEventArgs has no information about the type of manipulation that was executed. How can I find out whether it was a pinch or something else?

It depends on what specifically you'd like to find out. The Cumulative property shows what was done over the whole manipulation, so its Scale field will tell you whether scaling happened, which is the result of a pinch gesture. Really, though, you should be handling ManipulationDelta and responding to each delta event immediately; ManipulationCompleted is where you'd perhaps run a snap animation or something of that sort. For more detailed information about where each finger touches the screen, you can look at the Pointer* events (PointerPressed, PointerMoved, and so on).
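For example, a minimal sketch (the handler names and the 0.05 threshold are illustrative, and the element needs an appropriate ManipulationMode set for these events to fire at all):

private void OnManipulationDelta(object sender, ManipulationDeltaRoutedEventArgs e)
{
    // React to each incremental change as it arrives.
    var scale = e.Delta.Scale;       // 1.0 means no pinch in this delta
    var move = e.Delta.Translation;  // drag distance since the last delta
    // ... apply scale/move to your element here ...
}

private void OnManipulationCompleted(object sender, ManipulationCompletedRoutedEventArgs e)
{
    // Cumulative sums up the whole gesture.
    if (Math.Abs(e.Cumulative.Scale - 1.0f) > 0.05f)
    {
        // Scaling happened, i.e. the user pinched or stretched.
    }
    else
    {
        // No significant scaling - treat it as a drag, flick, or tap.
    }
}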

Related

Is there a way to get push-to-scroll functionality in Windows 8 Metro apps?

In the Windows 8 Consumer Preview, moving the mouse toward the left or right edge of the Start screen causes the content to scroll.
The standard controls (and the currently released preview apps) do not seem to support this.
Is there a way to make this work?
I asked this question at TechEd North America this year, after one of the sessions given by Paul Gusmorino, a lead program manager for the UI platform.
His answer was that no, apps can't do push-against-the-edge-to-scroll. WinJS and WinRT/XAML apps don't even get the events you would need to implement it yourself. Apps get events at the level of the mouse pointer, and once the mouse pointer hits the edge of the screen, it can't move any farther and you don't get any more events. (Well, it might wiggle up and down a little bit, but not if it hit a corner. At any rate, it's not good enough to scroll the way the Start screen does.)
He mentioned that, if you were writing a C++/DirectX app, you would be able to get the raw mouse input you needed to do this yourself -- you can get low-level "device moved by DX,DY" rather than the high-level "pointer moved to X,Y". I'm guessing this is how the Start screen does it, though I didn't think to ask.
But no, it's not built-in, it's not something you can implement yourself (unless you write your app in low-level C++/DirectX), and it sounds like they have no plans to add it before Windows 8 ships.
Personally, I think it's pretty short-sighted of them to have apps feel crippled compared to the Start screen, but evidently they're not concerned about little things like usability. </rant>
You can do the following to get information about mouse movement beyond the screen edge and use the delta information to scroll your content:
using Windows.Devices.Input;

// MouseDevice reports relative deltas, which keep arriving even when
// the pointer is pinned against the edge of the screen.
var mouse = MouseDevice.GetForCurrentView();
mouse.MouseMoved += mouse_MouseMoved;

private void mouse_MouseMoved(MouseDevice sender, MouseEventArgs args)
{
    // MouseDelta.X is the horizontal movement since the last event.
    tb.Text = args.MouseDelta.X.ToString();
}

Repeating NSTimer locks UI thread

First of all, I know there are a few other StackOverflow questions about this subject, but I have read them all and I am still confused about what to do in my situation. I'm probably missing something obvious here; if you could help clarify, that would be much appreciated!
I have an app which does a lot of work to animate images within a view - mainly a number of images moving in straight lines for a second or two at a time. At first I considered making them all simple one-off animations using UIView animateWithDuration for the whole duration of the movement, but that didn't give me much power to intercept the movement, stop it, or check how far along it was, so I scrapped it. My new approach is to use an NSTimer firing 20 times per second and doing incremental movements. This way I can also intervene (almost) instantly to change the animation, stop it, update a status label based on how far through it is, and so on.
First of all...there probably is a better way than this. Feel free to suggest something better!
Assuming this approach is acceptable, though, my issue now is that while these animations are happening, I can't click any of the other controls in the UI. I get no response. It's not that the clicks are just slow or delayed, either - they never come through at all. It seems that the NSTimer processing totally locks the UI, but only against new interactions: changes I make to the UI from within the timer-processing method happen just fine and are very snappy.
From what I've read, this shouldn't happen. However, I also saw a comment on this question saying that if the timer processing is intensive, it can lock the UI thread. I don't consider my processing that intensive here - certainly no resource requests, just a bit of data manipulation and moving some objects around - but I could be underplaying it.
What are my options here? At first I thought I might create a new thread to kick off the timer, but I remember reading that UI updates have to happen on the main thread anyway. Why is that? And would a separate thread really solve the issue? Am I simply asking too much of the device in having it process this timer as well as UI interactions? Is there something else I'm missing?
Any and all advice would be appreciated.
Edit:
I've just found the cause of my UI-blocking problem. I was using animateWithDuration with blocks but was not setting the options, so UIViewAnimationOptionAllowUserInteraction was not set. I changed the call to set this option, and my UI is happily responding now.
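For anyone hitting the same thing, here is a minimal sketch of the fixed call - shown in C# via the Xamarin.iOS bindings rather than Objective-C, with made-up view and destination names:

// AllowUserInteraction keeps touches flowing to the UI while the
// animation is in flight; without it, taps on animating views are eaten.
UIView.Animate(
    1.0,                                         // duration in seconds
    0.0,                                         // delay
    UIViewAnimationOptions.AllowUserInteraction,
    () => { movingView.Center = destination; },  // animation block
    () => { /* completion: snap, clean up, ... */ });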
That said, I'll still leave this question open for specific suggestions regarding my overall approach. Thanks.
I would consider using CADisplayLink. From the documentation:
A CADisplayLink object is a timer object that allows your application to synchronize its drawing to the refresh rate of the display.
Your application creates a new display link, providing a target object and a selector to be called when the screen is updated. Next, your application adds the display link to a run loop.
Once the display link is associated with a run loop, the selector on the target is called when the screen’s contents need to be updated. The target can read the display link’s timestamp property to retrieve the time that the previous frame was displayed. For example, an application that displays movies might use the timestamp to calculate which video frame will be displayed next. An application that performs its own animations might use the timestamp to determine where and how displayed objects appear in the upcoming frame. The duration property provides the amount of time between frames. You can use this value in your application to calculate the frame rate of the display, the approximate time that the next frame will be displayed, and to adjust the drawing behavior so that the next frame is prepared in time to be displayed.
Your application can disable notifications by setting the paused property to YES. Also, if your application cannot provide frames in the time provided, you may want to choose a slower frame rate. An application with a slower but consistent frame rate appears smoother to the user than an application that skips frames. You can increase the time between frames (and decrease the apparent frame rate) by changing the frameInterval property.
When your application finishes with a display link, it should call invalidate to remove it from all run loops and to disassociate it from the target.
CADisplayLink should not be subclassed.
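A minimal sketch of that pattern - again in C# via the Xamarin.iOS bindings, with Step and the bookkeeping around it as placeholders:

CADisplayLink displayLink = CADisplayLink.Create(Step);
displayLink.AddToRunLoop(NSRunLoop.Main, NSRunLoopMode.Default);

void Step()
{
    // Timestamp is the display time of the previous frame; advance the
    // animation by the elapsed time rather than by a fixed per-tick step.
    double now = displayLink.Timestamp;
    // ... move objects proportionally to (now - lastTimestamp) ...
}

// When the animation is finished: displayLink.Invalidate();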
I'm not totally sure how everything is handled in your program, but you might want to consider having one thread/timer that controls all of the objects and their movements. There's really no need to create a separate thread/timer for every single object, as that will easily cause problems.
You can just create a class for your moving items with some variables holding their direction, speed, duration, and so on, and then have the single controlling thread/timer calculate and move the objects. You can then intervene on the one main controller object instead of having to deal with many separate objects, as sketched below.
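A rough sketch of that structure (all names and fields here are invented for illustration):

using System.Collections.Generic;

// One item that knows its own motion; a single controller advances them all.
class MovingItem
{
    public double X, Y;           // current position
    public double Vx, Vy;         // velocity, in points per second
    public double TimeRemaining;  // seconds left in this movement

    public void Advance(double dt)
    {
        if (TimeRemaining <= 0) return;
        X += Vx * dt;
        Y += Vy * dt;
        TimeRemaining -= dt;
    }
}

class AnimationController
{
    private readonly List<MovingItem> items = new List<MovingItem>();

    // Called once per tick by the single timer or display link.
    public void Tick(double dt)
    {
        foreach (var item in items)
            item.Advance(dt);
        // One place to intervene: stop, redirect, or inspect progress.
    }
}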
I think you'll find that even if you optimize this, timer-based animation like this is not going to perform well.
You might want to ask about the specific things that you think you couldn't do with CoreAnimation. If you solve those issues, you'll end up with a much better result than trying to roll your own.

I want to animate the movement of a foreign OS X app's window

Background: I recently got two monitors and want a way to move the focused window to the other screen and vice versa. I've achieved this by using the Accessibility API. (Specifically, I get an AXUIElementRef that holds the AXUIElement associated with the focused window, then I set the NSAccessibilityPositionAttribute value to move the window.)
I have this working almost exactly the way I want it to, except I want to animate the movement of windows. I thought that if I could get the NSWindow somehow, I could get its layer and use CoreAnimation to animate the window movement.
Unfortunately, I found out that this isn't possible. (Correct me if I'm wrong, though - if there's a way to do it like this, that'd be great!) So I'm asking you all for help: how should I go about animating the movement of the focused window, given that I have access to the AXUIElementRef?
-R
Edit:
I was able to get a crude animation going by creating a while loop that moves the window's position a small amount on each pass. However, the results are pretty sub-par. As you can guess, it takes a lot of unnecessary processing power and is still very choppy. There must be a better way.
The best possible way I can imagine would be to perform some hacky property comparison between the AXUIElement info values for the window and the info returned from the CGWindow API. Once you're able to ascertain which windows in the CGWindow API match which AXUIElementRefs, you could grab bitmaps of the current window contents, overlay the screen with your own custom animated drawing of the faux windows, and then, as you drop the overlay, set the real AXUIElementRefs to the desired end-of-animation positions.
Hacky, tho.

Extending Functionality of Magic Mouse: Do I Need a kext?

I recently purchased a Magic Mouse. It is fantastic and full of potential. Unfortunately, it is seriously hindered by the software support. I want to fix that. I have done quite a lot of research and these are my findings regarding the event chain thus far:
The Magic Mouse sends full multitouch events to the system.
Multitouch events are processed in the MultitouchSupport.framework (Carbon)
The events are interpreted in the framework and sent up to the system as normal events
When you scroll with one finger it sends actual scroll wheel events.
When you swipe with two fingers it sends a swipe event.
No NSTouch events are sent up to the system. You cannot use the NSTouch API to interact with the mouse.
After I discovered all of the above, I disassembled the MultitouchSupport.framework binary and, with some googling, figured out how to insert a callback of my own into the chain so that I would receive the raw touch event data. If you enumerate the list of devices, you can attach a callback to each device (trackpad and mouse). This finding would enable us to create a framework for using multitouch on the mouse, but only in a single application. See my post here: Raw Multitouch Tracking.
I want to add new functionality to the mouse across the entire system, not just a single app.
In an attempt to do so, I figured out how to use Event Taps to see if the lowest level event tap would allow me to get the raw data, interpret it, and send up my own events in its place. Unfortunately, this is not the case. The event tap, even at the HID level, is still a step above where the input is being interpreted in MultitouchSupport.framework.
See my event tap attempt here: Event Tap - Attempt Raw Multitouch.
An interesting side note: when a multitouch event is received, such as a swipe, the default case is hit and prints out an event number of 29. The header shows 28 as being the max.
On to my question, now that you have all the information and have seen what I have tried: what would be the best approach to extending the functionality of the Magic Mouse? I know I need to insert something at a low enough level to get the input before it is processed and predefined events are dispatched. So, to boil it down to single sentence questions:
Is there some way to override the default callbacks used in MultitouchSupport.framework?
Do I need to write a kext and handle all the incoming data myself?
Is it possible to write a kext that sits on top of the kext that is handling the input now, and filters it after that kext has done all the hard work?
My first goal is to be able to dispatch a middle button click event if there are two fingers on the device when you click. Obviously there is far, far more that could be done, but this seems like a good thing to shoot for, for now.
Thanks in advance!
-Sastira
How does what happens in MultitouchSupport.framework differ between the Magic Mouse and a glass trackpad? If it is based on IOKit device properties, I suspect you will need a kext that emulates a trackpad but actually communicates with the mouse. Apple has some documentation on Darwin kernel programming and kernel extensions specifically:
About Kernel Extensions
Introduction to I/O Kit Device Driver Design Guidelines
Kernel Programming Guide
(Personally, I'd love something that enabled pinch magnification and more swipe/button gestures; as it is, the Magic Mouse is a functional downgrade from the Mighty Mouse's four buttons and [albeit ever-clogging] 2D scroll wheel. Update: last year I wrote Sesamouse to do just that, and it does NOT need a kext - just a week or two staring at hex dumps :-) See my other answer for the details and source code.)
Sorry I forgot to update this answer, but I ended up figuring out how to inject multitouch and gesture events into the system from userland via Quartz Event Services. I'm not sure how well it survived the Lion update, but you can check out the underlying source code at https://github.com/calftrail/Touch
It requires two hacks: using the private Multitouch framework to get the device input, and injecting undocumented CGEvent structures into Quartz Event Services. It was incredibly fun to figure out how to pull it off, but these days I recommend just buying a Magic Trackpad :-P
I've implemented a proof-of-concept userspace wrapper for customizable multitouch events.
You can read about it here: http://aladino.dmi.unict.it/?a=multitouch (available via the Wayback Machine)
--
all the best
If you get to that point, you may want to consider making the middle click three fingers on the mouse instead of two. I've thought about this middle-click issue with the Magic Mouse, and I've noticed that I often leave my second finger on the mouse even though I am only pressing for a left click. So a "two-finger" click might be mistaken for a single left click, and it would also require more effort from the user, who would always have to keep the second finger off the mouse. Therefore, if it's possible to detect, three fingers would cause less confusion and fewer headaches. I wonder where the first "middle button click" solution will come from, as I am anxious for my middle-click Exposé feature to return :) Best of luck.

Display something on the screen every time an action is made

I have a problem and am not sure how to solve it. I am developing a multi-touch game, and everything is already working fine except for one small issue: I want to show messages on the playing screen each time the player performs an action. For example, when his finger moves right, a message at the bottom of the screen nicely says "this finger is moving right"; when the finger moves left, it says the finger is moving left. Can anyone show me how? I am using Cocos2D; it would probably be much easier in plain Cocoa.
Thanks a lot for any help.
You'll probably need to be more specific with your question, but for now, here's a general answer:
Handling touch events on the iPhone and Handling touch ("trackpad") events on the Mac.
You'll receive and process the events per the above, then you'll display the results somehow. For testing, you'll probably just want to log the results to the console. For the final version, you might have a label or even a custom view that draws the "instruction" in some fancier way. If the latter is the case, you'll want to read up on custom views and drawing for whichever platform you're using (or both).
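As a rough illustration of that receive-process-display loop - plain UIKit in C# via the Xamarin.iOS bindings rather than Cocos2D, with statusLabel standing in for whatever label or custom view you use:

public override void TouchesMoved(NSSet touches, UIEvent evt)
{
    base.TouchesMoved(touches, evt);
    var touch = (UITouch)touches.AnyObject;
    var current = touch.LocationInView(View);
    var previous = touch.PreviousLocationInView(View);

    // Compare against the previous location to derive a direction.
    statusLabel.Text = current.X > previous.X
        ? "this finger is moving right"
        : "this finger is moving left";
}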