HammerJS pinch is erratic in Ionic running on iOS 13 (Ionic 4)

The pinch function from HammerJS 2.0.8 gives erratic values on iOS 13 in an Ionic 4 application.
The 'scale' values in the triggered events appear random and no longer track the pinch movement. All works fine on iOS 12.
I have added handling for the 'pinchcancel' event. Previously I only handled 'pinchend', but on iOS 13 one should also handle the cancel event.
Without handling the cancel event the pinch operation is never concluded. Handling the cancel event solves that, but not the erratic scale values.
<div #wrapper id="wrapper" (pinchstart)="onPinchStart($event)"
     (pinch)="onPinch($event)" (pinchend)="onPinchEnd($event)"
     (pinchcancel)="onPinchEnd($event)">
</div>
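For reference, a minimal sketch of what the component-side handlers could look like (my own illustration, not the original code; the baseScale and currentScale fields are hypothetical names):

baseScale = 1;
currentScale = 1;

onPinchStart(ev: any): void {
  // Remember the accumulated scale at the start of the gesture
  this.baseScale = this.currentScale;
}

onPinch(ev: any): void {
  // ev.scale is relative to the start of the current gesture
  this.currentScale = this.baseScale * ev.scale;
}

onPinchEnd(ev: any): void {
  // Bound to both pinchend and pinchcancel, so the gesture is always concluded
  this.currentScale = this.baseScale * ev.scale;
}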
Addition October 4th 2019:
Debugging showed that the onPinch handler receives events with eventType 8 whose scale values are way too high. For now I have added a test to skip eventType 8. That makes it better, but the pinch itself tends to end prematurely when one pinches somewhat roughly or quickly.
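In HammerJS's input constants, eventType 8 is INPUT_CANCEL, so the skip amounts to a one-line guard at the top of the handler (a sketch, reusing the hypothetical fields from above):

onPinch(ev: any): void {
  // 8 === Hammer.INPUT_CANCEL; on iOS 13 these events carry unreliable scale values
  if (ev.eventType === 8) { return; }
  this.currentScale = this.baseScale * ev.scale;
}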
Addition October 8th 2019:
Apparently iOS 13 supports Pointer Events, but HammerJS does not handle these correctly, causing the erratic behaviour.
For now I have disabled Pointer Events in HammerJS's code. That solves my problem.
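For anyone who prefers not to edit the library's code, a less invasive sketch (assuming Angular's HammerGestureConfig; the MyHammerConfig class name is my own) is to force HammerJS to use its touch input class so the misbehaving Pointer Events path is never taken:

import { Injectable } from '@angular/core';
import { HammerGestureConfig } from '@angular/platform-browser';
import * as Hammer from 'hammerjs';

@Injectable()
export class MyHammerConfig extends HammerGestureConfig {
  buildHammer(element: HTMLElement): any {
    // inputClass selects the event source HammerJS listens to;
    // Hammer.TouchInput bypasses pointer events entirely
    return new Hammer(element, { inputClass: Hammer.TouchInput });
  }
}

Registering this class via Angular's HAMMER_GESTURE_CONFIG provider then applies it to all (pinch) bindings in the app.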

Related

How can one detect DocumentWindow movement and/or resizing?

Past questions have dealt with detection of changes within the DigitalMicrograph UI, such as the closing of image windows or changes to ROIs, for which there is a good set of listener events available. Are there similar ways to detect the movement or resizing of DocumentWindow objects?
Yes, such messages exist for the documentWindow listener.
Similar to the window_closed message, you can also use the window_begin_drag, window_end_drag, window_move_or_size, window_updated and window_opened messages.
However, these event messages were only added in GMS 3.0.

How to find out type of manipulation in windows store app

I'm handling the ManipulationCompleted event in my control in a Windows RT application.
But ManipulationCompletedRoutedEventArgs has no information about the type of manipulation executed. How can I find out whether it was a pinch or something else?
It depends what specifically you'd like to find out. The Cumulative property shows what was done overall in the manipulation, so its Scale field will tell you whether scaling happened, which is the result of a pinch gesture. Really, though, you should be handling ManipulationDelta and responding to each delta event immediately; ManipulationCompleted is where you'd perhaps run a snap animation or something of that sort. For more detailed information about where each finger touches the screen, you could look at the Pointer~ events.

wxPanel flickering/failure when window is inactive

Basically, I have a wxWidgets application that uses OpenGL - the rendering is displayed on a panel that can be updated through user input (clicking, dragging, etc.), a wxTimer, or events generated by external processes. The problem arises when focus shifts to another window (whether an internal dialog box or another application entirely) - the wxPanel ceases to update anywhere from immediately to after a few seconds, particularly if the other window is on top of it (sometimes a small part of the panel that was obscured will still continue to update). Reactivating the application or resizing the window "unfreezes" the panel, and normal operation continues.
This is an issue I've always had in wxWidgets, be it with an OpenGL panel as in this case, or otherwise. Generally, I've been able to work around it by making numerous SwapBuffers() calls between a Freeze() and a Thaw() upon window refocusing, window resizing, or something similarly kludgy, but these all have the potential to produce flicker or other non-negligible visual artifacts, and they can also hurt performance if done every frame (such as when an animation needs to continue playing in an inactive window).
An indeterminate period of trial and error could probably produce something nice and kludgy for this too, but I'd really like to know: what is the "right" way to handle this problem? Many thanks in advance.
Here's a skeleton of the code for reference:
// This is what's called by the rest of the application
void MyGLCanvas::Draw(bool focus, int parentID)
{
    if (focus) { SetFocus(); }
    SetCurrent();
    wxClientDC dc(this);
    Paint(dc);
}

void MyGLCanvas::Paint(wxDC &dc)
{
    // All OpenGL drawing code here
    glFlush();
    SwapBuffers();
}

void MyGLCanvas::OnPaint(wxPaintEvent& event)
{
    wxPaintDC dc(this);
    Paint(dc);
    event.Skip();
}
You're doing several strange or downright wrong things here:
Do not skip the event in your OnPaint(). This is a fatal error in wxWidgets 2.8 under Windows, and even though we clean up after your code if it does this in 3.0, it's still a bad idea.
Don't use wxClientDC; just call Refresh() and let Windows repaint the window by invoking your existing OnPaint() handler.
Don't call SetFocus() from Draw(); this really shouldn't be necessary.
Do call SetCurrent() before drawing. Always, not just in Draw().
I don't know which of these results in the problems you're seeing, but you really should change all of them. See samples\opengl\cube\cube.cpp for an example of how to do it correctly.

How to monitor for swipe gesture globally in OS X

I'd like to make an OS X application that runs in the background and performs some function when a four-finger swipe down is detected on the trackpad.
Seems easy enough. Apple's docs show almost exactly this here. Their example monitors for mouse-down events. As a simple test, I put the following in applicationDidFinishLaunching: in my AppDelegate:
void (^handler)(NSEvent *e) = ^(NSEvent *e) {
    NSLog(@"Left Mouse Down!");
};
[NSEvent addGlobalMonitorForEventsMatchingMask:NSLeftMouseDownMask handler:handler];
This works as expected. However, changing NSLeftMouseDownMask to NSEventMaskSwipe does not work. What am I missing?
Well, the documentation for NSEvent's +addGlobalMonitorForEventsMatchingMask:handler: gives a list of the events it supports, and NSEventMaskSwipe is not listed, so... it's to be expected that it doesn't work.
While the API obviously supports tracking gestures locally within your own application (through NSResponder), I believe gestures can't be tracked globally by design. Unlike key combinations, there are far fewer forms/types of gestures... essentially only:
pinch in/out (NSEventTypeMagnify)
rotations (NSEventTypeRotation)
directional swipes with X amount of fingers (NSEventTypeSwipe)
There's not as much freedom. With keys, you have plenty of modifiers (control, option, command, shift) and the whole set of alphanumeric keys, making plenty of possible combinations, so it's easier to avoid conflicts between local and global events. Similarly, mouse events are region-based; clicking in one region can easily be differentiated from clicking in another (from both the program's and the user's point of view).
Because of this smaller space of possible touch-event combinations, I believe Apple may purposely be restricting global usage (as in, one app responding to one or more gestures for the whole system) to its own features (Mission Control, Dashboard, etc.).

Extending Functionality of Magic Mouse: Do I Need a kext?

I recently purchased a Magic Mouse. It is fantastic and full of potential. Unfortunately, it is seriously hindered by the software support. I want to fix that. I have done quite a lot of research and these are my findings regarding the event chain thus far:
The Magic Mouse sends full multitouch events to the system.
Multitouch events are processed in the MultitouchSupport.framework (Carbon)
The events are interpreted in the framework and sent up to the system as normal events
When you scroll with one finger it sends actual scroll wheel events.
When you swipe with two fingers it sends a swipe event.
No NSTouch events are sent up to the system. You cannot use the NSTouch API to interact with the mouse.
After I discovered all of the above, I disassembled the MultitouchSupport.framework binary and, with some googling, figured out how to insert a callback of my own into the chain so I would receive the raw touch event data. If you enumerate the list of devices, you can attach to each device (trackpad and mouse). This finding would enable us to create a framework for using multitouch on the mouse, but only in a single application. See my post here: Raw Multitouch Tracking.
I want to add new functionality to the mouse across the entire system, not just a single app.
In an attempt to do so, I figured out how to use Event Taps to see if the lowest level event tap would allow me to get the raw data, interpret it, and send up my own events in its place. Unfortunately, this is not the case. The event tap, even at the HID level, is still a step above where the input is being interpreted in MultitouchSupport.framework.
See my event tap attempt here: Event Tap - Attempt Raw Multitouch.
An interesting side note: when a multitouch event is received, such as a swipe, the default case is hit and prints out an event number of 29. The header shows 28 as being the max.
On to my question, now that you have all the information and have seen what I have tried: what would be the best approach to extending the functionality of the Magic Mouse? I know I need to insert something at a low enough level to get the input before it is processed and predefined events are dispatched. So, to boil it down to single-sentence questions:
Is there some way to override the default callbacks used in MultitouchSupport.framework?
Do I need to write a kext and handle all the incoming data myself?
Is it possible to write a kext that sits on top of the kext that is handling the input now, and filters it after that kext has done all the hard work?
My first goal is to be able to dispatch a middle button click event if there are two fingers on the device when you click. Obviously there is far, far more that could be done, but this seems like a good thing to shoot for, for now.
Thanks in advance!
-Sastira
How does what happens in MultitouchSupport.framework differ between the Magic Mouse and a glass trackpad? If it is based on IOKit device properties, I suspect you will need a kext that emulates a trackpad but actually communicates with the mouse. Apple has some documentation on Darwin kernel programming and kernel extensions specifically:
About Kernel Extensions
Introduction to I/O Kit Device Driver Design Guidelines
Kernel Programming Guide
(Personally, I'd love something that enabled pinch magnification and more swipe/button gestures; as it is, the Magic Mouse is a functional downgrade from the Mighty Mouse's four buttons and [albeit ever-clogging] 2D scroll wheel. Update: last year I wrote Sesamouse to do just that, and it does NOT need a kext, just a week or two of staring at hex dumps :-) See my other answer for the deets and source code.)
Sorry I forgot to update this answer, but I ended up figuring out how to inject multitouch and gesture events into the system from userland via Quartz Event Services. I'm not sure how well it survived the Lion update, but you can check out the underlying source code at https://github.com/calftrail/Touch
It requires two hacks: using the private Multitouch framework to get the device input, and injecting undocumented CGEvent structures into Quartz Event Services. It was incredibly fun to figure out how to pull it off, but these days I recommend just buying a Magic Trackpad :-P
I've implemented a proof of concept of a userspace-customizable multi-touch event wrapper.
You can read about it here: http://aladino.dmi.unict.it/?a=multitouch (see it in the Wayback Machine)
--
all the best
If you get to that point, you may want to consider making the middle click three fingers on the mouse instead of two. I've thought about this middle-click issue with the Magic Mouse, and I notice that I often leave my 2nd finger on the mouse even though I am only pressing for a left click. So a "2 finger" click might be mistaken for a single left click, and it would also require more effort from the user, who would always have to keep the 2nd finger off the mouse. Therefore, if it's possible to detect, three fingers would cause less confusion and fewer headaches. I wonder where the first "middle button click" solution will come from, as I am anxious for my middle-click Exposé feature to return :) Best of luck.