How can one detect DocumentWindow movement and/or resizing? - dm-script

Past questions have dealt with detection of changes within the DigitalMicrograph UI such as closing of image windows or changes to ROIs, for which there is a good set of listener events available. Are there similar ways to detect the movement or resizing of DocumentWindow objects?

Yes, such messages exist for the DocumentWindow listener.
Similar to the window_closed message, you can also use the window_begin_drag, window_end_drag, window_move_or_size, window_updated and window_opened messages.
Note, however, that these event messages are only available in GMS 3.0 and later.
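By way of illustration, here is a minimal (untested) sketch of how such a listener is typically attached in GMS 3. It assumes the usual script-class listener pattern and the WindowAddWindowListener attach call (the same mechanism used for window_closed); please check the exact call names and the handler signature against the scripting documentation of your GMS version.

Class CWindowWatcher
{
	// Handler signature assumed to follow the usual listener pattern:
	// ( object self, number eventFlags, DocumentWindow win )
	void OnMoveOrSize( object self, number eventFlags, DocumentWindow win )
	{
		Result( "DocumentWindow was moved or resized.\n" )
	}

	void OnEndDrag( object self, number eventFlags, DocumentWindow win )
	{
		Result( "Dragging of the DocumentWindow has finished.\n" )
	}
}

// Attach the listener to the front-most document window
DocumentWindow win = GetDocumentWindow( 0 )
object listener = Alloc( CWindowWatcher )
number listenerID = win.WindowAddWindowListener( listener, "window_move_or_size:OnMoveOrSize;window_end_drag:OnEndDrag" )

Keep the returned listener ID if you want to detach the listener again later via the corresponding remove-listener call.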

Related

sony-camera-api trackingFocus

I can turn tracking focus on and use actTrackingFocus. Once actTrackingFocus is set, how can I get the coordinates back from the camera so I can draw a box in the liveview image showing what the camera is focused on?
That is not possible with the existing API, unfortunately.
I appreciate that this is an old question, but if you are still trying and are OK with playing in Python...
The tracking focus location is (apparently) reported via the frame info packets, so you have to enable them and then decode them.
We are attempting to do this with pysony.
Use 'python src/example/pygameLiveView -i' to see the reported locations. You might need to add your 'actTrackingFocus()' call to enable tracking focus, but the locations should be rendered (as boxes with triangle corners) on screen.
Since none of the devs have a camera which supports tracking focus, we'd love to hear whether or not it works. :-)

How to find out type of manipulation in windows store app

I'm handling the ManipulationCompleted event in my control in a Windows RT application.
But ManipulationCompletedRoutedEventArgs has no information about the type of manipulation that was executed. How can I find out whether it was a pinch or something else?
It depends on what specifically you'd like to find out. The Cumulative property shows what was done overall in the manipulation, so its Scale field will tell you whether scaling happened, which is the result of a pinch gesture. Really, though, you should be handling ManipulationDelta and responding to each delta event immediately; ManipulationCompleted is where you'd perhaps run a snap animation or something of that sort. For more detailed information about where each finger touches the screen, you could look at the Pointer~ events.

SwapChainBackgroundPanel not calling Rendering event when GPU picking - DirectX and XAML

I have already sort of asked this question here (Previous Question), but it only got a handful of views and zero answers/comments, so I thought I'd give it a go again with some more info that I've found.
I basically have a Windows Store DirectX + XAML app that I'm developing. I currently have the problem that the Rendering event of the SwapChainBackgroundPanel that I use for DirectX rendering (as per the Windows 8 example on MSDN) sometimes isn't called when the user is interacting with the app.
It will continue to update if I am doing something with the camera, such as changing what it's looking at based on the touch/mouse position, but it won't be called while I am picking, and I don't know why.
I use the standard GPU picking method (where I render the scene with a unique color for each object and then read back a 1x1 texture of the press area to find the selected object), but when I am using this picking technique to select multiple objects (the user drags their finger/mouse over many objects), Rendering isn't being called. So in effect lots of objects get selected, but the user only sees this when they lift their finger/release the mouse button.
Is there any reason why this is happening? Is it because of the GPU picking method? And if so is there a way around it rather than using the ray-trace picking method (which considerably slows down picking for a large number of objects)?
Has anyone else had this problem? Is there an explanation from Microsoft anywhere that it is deliberate that rendering doesn't get called while this is happening?
Thanks for your time.

using kinect skeleton - no interest in wpf drawing

Good day,
I would like to take this opportunity to give my many thanks to the people of stackoverflow.com.
I have been new to coding (.NET) over the past year, and I have always found Stack Overflow to be a fantastic base of knowledge for learning. I spent the last couple of weeks working, in depth, on a speech recognition project I am going to use with the upcoming release of Media Browser 3. Originally, the idea was to build a recognizer and have it control media. However, as I moved through the different namespaces for speech recognition, it led me into the realm of the Microsoft Kinect sensor. The more I use the Kinect device, the more I would like to use some of the skeleton tracking it has to offer, which leads me to my question.
I am not interested in building a WPF application that displays a window of what the Kinect is seeing. This is part of a Forms application, in which I would like to support only two or three gestures.
The idea is for it to watch for these gestures and simulate a key press on the keyboard.
So first I enable the skeleton frame before the audio for the recognizer, because I had read somewhere on here that enabling the skeleton after the audio cancels the audio for some reason.
Then I add some event handlers to my form, including a SkeletonFrameReady handler.
I suppose my main questions would be: am I on the right track with skeleton tracking? Is it possible to do this from a Forms application without trying to draw the skeleton?
Thank you again,
I hope I made sense, sorry for my ignorance.
Ben
It is possible, of course. For gesture recognition you can compare the positions of the joints (in the handler the SkeletonFrameReady event calls, which runs several times per second).
If you want to recognize complex gestures (like waving a hand), I suggest you take a look at this page http://blogs.msdn.com/b/mcsuksoldev/archive/2011/08/08/writing-a-gesture-service-with-the-kinect-for-windows-sdk.aspx and download the sample code there (it is hidden in the last paragraph :).
The main idea is checking for predefined gesture segments in the correct order (if segment1 succeeds, look at segment2; if segment2 is paused, check segment2 again until it either succeeds or fails).
Hope this helps.

Extending Functionality of Magic Mouse: Do I Need a kext?

I recently purchased a Magic Mouse. It is fantastic and full of potential. Unfortunately, it is seriously hindered by the software support. I want to fix that. I have done quite a lot of research and these are my findings regarding the event chain thus far:
The Magic Mouse sends full multitouch events to the system.
Multitouch events are processed in the MultitouchSupport.framework (Carbon)
The events are interpreted in the framework and sent up to the system as normal events
When you scroll with one finger it sends actual scroll wheel events.
When you swipe with two fingers it sends a swipe event.
No NSTouch events are sent up to the system. You cannot use the NSTouch API to interact with the mouse.
After I discovered all of the above, I disassembled the MultitouchSupport.framework file and, with some googling, figured out how to insert a callback of my own into the chain so I would receive the raw touch event data. If you enumerate the list of devices, you can attach to each device (trackpad and mouse). This finding would enable us to create a framework for using multitouch on the mouse, but only in a single application. See my post here: Raw Multitouch Tracking.
I want to add new functionality to the mouse across the entire system, not just a single app.
In an attempt to do so, I figured out how to use Event Taps to see if the lowest level event tap would allow me to get the raw data, interpret it, and send up my own events in its place. Unfortunately, this is not the case. The event tap, even at the HID level, is still a step above where the input is being interpreted in MultitouchSupport.framework.
See my event tap attempt here: Event Tap - Attempt Raw Multitouch.
An interesting side note: when a multitouch event is received, such as a swipe, the default case is hit and prints out an event number of 29. The header shows 28 as being the max.
On to my question, now that you have all the information and have seen what I have tried: what would be the best approach to extending the functionality of the Magic Mouse? I know I need to insert something at a low enough level to get the input before it is processed and predefined events are dispatched. So, to boil it down to single sentence questions:
Is there some way to override the default callbacks used in MultitouchSupport.framework?
Do I need to write a kext and handle all the incoming data myself?
Is it possible to write a kext that sits on top of the kext that is handling the input now, and filters it after that kext has done all the hard work?
My first goal is to be able to dispatch a middle button click event if there are two fingers on the device when you click. Obviously there is far, far more that could be done, but this seems like a good thing to shoot for, for now.
Thanks in advance!
-Sastira
How does what is happening in MultitouchSupport.framework differ between the Magic Mouse and a glass trackpad? If it is based on IOKit device properties, I suspect you will need a KEXT that emulates a trackpad but actually communicates with the mouse. Apple have some documentation on Darwin kernel programming and kernel extensions specifically:
About Kernel Extensions
Introduction to I/O Kit Device Driver Design Guidelines
Kernel Programming Guide
(Personally, I'd love something that enabled pinch magnification and more swipe/button gestures; as it is, the Magic Mouse is a functional downgrade from the Mighty Mouse's four buttons and [albeit ever-clogging] 2D scroll wheel. Update: last year I wrote Sesamouse to do just that, and it does NOT need a kext (just a week or two staring at hex dumps :-) See my other answer for the deets and source code.)
Sorry I forgot to update this answer, but I ended up figuring out how to inject multitouch and gesture events into the system from userland via Quartz Event Services. I'm not sure how well it survived the Lion update, but you can check out the underlying source code at https://github.com/calftrail/Touch
It requires two hacks: using the private Multitouch framework to get the device input, and injecting undocumented CGEvent structures into Quartz Event Services. It was incredibly fun to figure out how to pull it off, but these days I recommend just buying a Magic Trackpad :-P
I've implemented a proof of concept of a userspace-customizable multi-touch event wrapper.
You can read about it here: http://aladino.dmi.unict.it/?a=multitouch (see it in the Wayback Machine).
--
all the best
If you get to that point, you may want to consider the middle click being three fingers on the mouse instead of two. I've thought about this middle-click issue with the Magic Mouse, and I notice that I often leave my second finger on the mouse even though I am only pressing for a left click. So a "two finger" click might be mistaken for a single left click, and it would also require more effort from the user in always having to keep the second finger off the mouse. Therefore, if it's possible to detect, three fingers would cause less confusion and headaches. I wonder where the first "middle button click" solution will come from, as I am anxious for my middle-click Exposé feature to return :) Best of luck.