wxPanel flickering/failure when window is inactive (wxWidgets)

Basically, I have a wxWidgets application that uses OpenGL; the GL output is displayed on a panel that can be updated through user input (clicking, dragging, etc.), a wxTimer, or events generated by external processes. The problem arises when focus shifts to another window (whether an internal dialog box or another application entirely): the wxPanel stops updating anywhere from immediately to after a few seconds, particularly if the other window is on top of it (sometimes a small part of the panel that was obscured will continue to update). Reactivating the application or resizing the window "unfreezes" the panel, and normal operation continues.
This is an issue I've always had with wxWidgets, whether with an OpenGL panel as in this case or otherwise. Generally, I've been able to work around it by making numerous SwapBuffers() calls between a Freeze() and a Thaw() on window refocusing, window resizing, or something similarly kludgy, but these all have the potential to produce flicker or other non-negligible visual artifacts, and they can also hurt performance if done every frame (such as when an animation needs to keep playing in an inactive window).
An indeterminate period of trial and error could probably produce something suitably kludgy for this as well, but I'd really like to know: what is the "right" way to handle this problem? Many thanks in advance.
Here's a skeleton of the code for reference:
void MyGLCanvas::Draw(bool focus, int parentID) // This is what's called by the rest of the application
{
    if (focus) { SetFocus(); }
    SetCurrent();
    wxClientDC dc(this);
    Paint(dc);
}

void MyGLCanvas::Paint(wxDC &dc)
{
    // All OpenGL drawing code here
    glFlush();
    SwapBuffers();
}

void MyGLCanvas::OnPaint(wxPaintEvent& event)
{
    wxPaintDC dc(this);
    Paint(dc);
    event.Skip();
}

You're doing several strange or downright wrong things here:
Do not skip the event in your OnPaint(). This is a fatal error in wxWidgets 2.8 under Windows, and even though 3.0 cleans up after code that does this, it's still a bad idea.
Don't use wxClientDC, just call Refresh() and let Windows repaint the window by invoking your existing OnPaint() handler.
Don't call SetFocus() from Draw(), this really shouldn't be necessary.
Do call SetCurrent() before drawing. Always, not just in Draw().
I don't know which of those results in the problems you're seeing but you really should change all of them. See samples\opengl\cube\cube.cpp for an example of how to do it correctly.
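For reference, here is a minimal sketch of how the drawing path might look after applying those changes, assuming wxWidgets 3.0 where SetCurrent() takes an explicit wxGLContext (m_context is an assumed member; the GL calls themselves are the same as in the question's skeleton):

// Minimal corrected sketch: all real drawing happens in the paint handler,
// and Draw() only invalidates the window. Parameters kept to match the question.
void MyGLCanvas::Draw(bool focus, int parentID)
{
    // Mark the canvas dirty; a paint event will arrive and OnPaint() will run.
    Refresh(false);  // false = don't erase background, avoids flicker
}

void MyGLCanvas::OnPaint(wxPaintEvent& WXUNUSED(event))
{
    wxPaintDC dc(this);       // must be created in a paint handler, even if unused
    SetCurrent(*m_context);   // make the GL context current before any GL call

    // ...all OpenGL drawing code here...

    glFlush();
    SwapBuffers();
    // Note: no event.Skip() here.
}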

Related

How to track the location of a window belonging to another app

When screen sharing a specific window on macOS with Zoom or Skype/Teams, they draw a red or green highlight border around that window (which belongs to a different application) to indicate it is being shared. The border is following the target window in real time, with resizing, z-order changes etc.
What macOS APIs and techniques might be used to achieve this effect?
You can find the location of windows using CGWindowListCopyWindowInfo and related APIs, which are available to sandboxed apps.
This is a very fast and efficient API, fast enough to be polled. The SonOfGrab sample code is a great platform to try out this stuff.
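As a rough illustration (this is not code from the answer), the window list can be polled with plain CoreGraphics/CoreFoundation calls, which compile as C or C++:

// Sketch: enumerate on-screen windows and read their owner names and bounds.
// Link against the CoreGraphics and CoreFoundation frameworks.
#include <CoreGraphics/CoreGraphics.h>
#include <CoreFoundation/CoreFoundation.h>
#include <stdio.h>

static void DumpWindowBounds(void)
{
    CFArrayRef windows = CGWindowListCopyWindowInfo(
        kCGWindowListOptionOnScreenOnly, kCGNullWindowID);
    if (!windows)
        return;

    for (CFIndex i = 0; i < CFArrayGetCount(windows); ++i) {
        CFDictionaryRef info =
            (CFDictionaryRef)CFArrayGetValueAtIndex(windows, i);

        // Name of the application owning this window.
        CFStringRef owner =
            (CFStringRef)CFDictionaryGetValue(info, kCGWindowOwnerName);
        char name[256] = "";
        if (owner)
            CFStringGetCString(owner, name, sizeof(name), kCFStringEncodingUTF8);

        // Window geometry is stored as a dictionary under kCGWindowBounds.
        CFDictionaryRef boundsDict =
            (CFDictionaryRef)CFDictionaryGetValue(info, kCGWindowBounds);
        CGRect bounds;
        if (boundsDict &&
            CGRectMakeWithDictionaryRepresentation(boundsDict, &bounds)) {
            printf("%s: (%.0f, %.0f) %.0fx%.0f\n", name,
                   bounds.origin.x, bounds.origin.y,
                   bounds.size.width, bounds.size.height);
        }
    }
    CFRelease(windows);
}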
You can also install a global event monitor using +[NSEvent addGlobalMonitorForEventsMatchingMask:handler:] (available in the sandbox) to track mouse-down, drag, and mouse-up events, so you can respond immediately whenever the user starts or releases a drag. This way your response will be snappy.
(Drawing a border would be done by creating your own transparent window, slightly larger than, and at the same window layer as, the window you are tracking. And then simply draw a pretty green box into it. I'm not exactly sure about setting the z-order. The details of this part would be best as a separate question.)

GLUT animation leads to 100% utilization of 1 core when the window is invisible

I developed a Python program that uses PyOpenGL and GLUT for window management to show an animation. In order to have the animation run at the fastest possible framerate, I set
glutIdleFunc(glutPostRedisplay)
as recommended e.g. here.
That works well, I get a steady 60 FPS with not a lot of CPU load.
However, as soon as the window is hidden by another window, one CPU core jumps to 100% utilization.
My suspicion is that while the window is visible, the rate at which the glutDisplayFunc is called is limited because it contains a call to glutSwapBuffers(), which waits for vsync, and that this limitation goes away when the window is invisible.
I tried to solve the problem by keeping track of visibility (through a glutVisibilityFunc) and putting the following code at the beginning of my glutDisplayFunc:
if not visible:
    time.sleep(0.1)
    return
This does not however have the desired effect.
What's happening here, and how do I avoid it?
I found the solution here, and it is obvious once you know it: disable glutPostRedisplay as the glutIdleFunc when the window becomes invisible. Concretely, use a glutVisibilityFunc like this:
def visibility(state):
    if state == GLUT_VISIBLE:
        glutIdleFunc(glutPostRedisplay)
    else:
        glutIdleFunc(None)
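For readers driving GLUT directly from C or C++ rather than PyOpenGL, the same fix is a sketch like this (display() and the rest of the GLUT setup are assumed to exist already):

/* Stop posting redisplays while the window is hidden. */
#include <GL/glut.h>

static void visibility(int state)
{
    if (state == GLUT_VISIBLE)
        glutIdleFunc(glutPostRedisplay);  /* redraw as fast as possible */
    else
        glutIdleFunc(NULL);               /* idle callback off: no more redisplays */
}

static void register_callbacks(void)
{
    /* call this after glutCreateWindow() and glutDisplayFunc(display) */
    glutVisibilityFunc(visibility);
}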

How to find out the type of manipulation in a Windows Store app

I'm handling the ManipulationCompleted event in my control in a Windows Store (WinRT) application.
But ManipulationCompletedRoutedEventArgs has no information about the type of manipulation that was executed. How can I find out whether it was a pinch or something else?
It depends what specifically you'd like to find out. The Cumulative property shows what was done over the whole manipulation, so its Scale field will tell you whether scaling happened, which is the result of a pinch gesture. Really, though, you should be handling ManipulationDelta and responding immediately to each delta event; ManipulationCompleted is where you'd perhaps run a snap animation or something of that sort. For more detailed information about where each finger touches the screen, you could look at the Pointer* events (PointerPressed, PointerMoved, and so on).
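As an illustration, here is a sketch of that delta-based approach in C++/CX (the page class MyPage and the element name "target" are assumptions, not from the question; the same properties exist in C#):

// Registration, e.g. in MyPage's constructor. ManipulationMode must be set,
// otherwise the element receives no manipulation events:
//     target->ManipulationMode = ManipulationModes::All;
//     target->ManipulationDelta +=
//         ref new ManipulationDeltaEventHandler(this, &MyPage::OnManipulationDelta);

void MyPage::OnManipulationDelta(Platform::Object^ sender,
    Windows::UI::Xaml::Input::ManipulationDeltaRoutedEventArgs^ e)
{
    // Delta is the change since the previous event; Cumulative is the gesture so far.
    if (e->Delta.Scale != 1.0f)
    {
        // A pinch/stretch is in progress: feed Delta.Scale into the zoom logic.
    }
}

void MyPage::OnManipulationCompleted(Platform::Object^ sender,
    Windows::UI::Xaml::Input::ManipulationCompletedRoutedEventArgs^ e)
{
    // Cumulative.Scale != 1.0f means the overall gesture included a pinch.
    float overallScale = e->Cumulative.Scale;
    (void)overallScale;  // e.g. decide here whether to run a snap animation
}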

How to monitor for swipe gesture globally in OS X

I'd like to make an OS X application that runs in the background and performs some function when a four-finger swipe down is detected on the trackpad.
Seems easy enough. Apple's docs show almost exactly this here. Their example monitors for mouse down events. As a simple test, I put the following in applicationDidFinishLaunching: in my AppDelegate.
void (^handler)(NSEvent *e) = ^(NSEvent *e) {
    NSLog(@"Left Mouse Down!");
};
[NSEvent addGlobalMonitorForEventsMatchingMask:NSLeftMouseDownMask handler:handler];
This works as expected. However, changing NSLeftMouseDownMask to NSEventMaskSwipe does not work. What am I missing?
Well, the documentation for NSEvent's +addGlobalMonitorForEventsMatchingMask:handler: gives a list of the events it supports, and NSEventMaskSwipe is not listed, so it's to be expected that it doesn't work.
While the API obviously supports tracking gestures locally within your own application (through NSResponder), I believe gestures can't be tracked globally by design. Unlike key combinations, there are far fewer forms/types of gestures; essentially only:
pinch in/out (NSEventTypeMagnify)
rotations (NSEventTypeRotation)
directional swipes with X amount of fingers (NSEventTypeSwipe)
There's not as much freedom. With keys, you have plenty of modifiers (control, option, command, shift) and the whole range of alphanumeric keys, making plenty of possible combinations, so it's easier to reduce the number of conflicts between local events and global events. Similarly, mouse events are region-based; clicking in one region can easily be differentiated from clicking in another region (from both the program's and the user's point of view).
Because there are so few possible combinations of touch events, I believe Apple may be purposely restricting global usage (as in, one app responding to one or more gestures for the whole system) to its own features (Mission Control, Dashboard, etc.).

Is there a way to get push-to-scroll functionality in Windows 8 Metro apps?

In the Windows 8 Consumer Preview, moving the mouse towards the left or right edge in the start screen causes the content to scroll.
The standard controls (and the currently released preview apps) do not seem to support this.
Is there a way to make this work?
I asked this question at TechEd North America this year, after one of the sessions given by Paul Gusmorino, a lead program manager for the UI platform.
His answer was that no, apps can't do push-against-the-edge-to-scroll. WinJS and WinRT/XAML apps don't even get the events you would need to implement it yourself. Apps get events at the level of the mouse pointer, and once the mouse pointer hits the edge of the screen, it can't move any farther and you don't get any more events. (Well, it might wiggle up and down a little bit, but not if it hit a corner. At any rate, it's not good enough to scroll the way the Start screen does.)
He mentioned that, if you were writing a C++/DirectX app, you would be able to get the raw mouse input you needed to do this yourself -- you can get low-level "device moved by DX,DY" rather than the high-level "pointer moved to X,Y". I'm guessing this is how the Start screen does it, though I didn't think to ask.
But no, it's not built-in, it's not something you can implement yourself (unless you write your app in low-level C++/DirectX), and it sounds like they have no plans to add it before Windows 8 ships.
Personally, I think it's pretty short-sighted of them to have apps feel crippled compared to the Start screen, but evidently they're not concerned about little things like usability.
You can do the following to get information about mouse movement beyond the screen edge and use the delta information to scroll your content.
using Windows.Devices.Input;

var mouse = MouseDevice.GetForCurrentView();
mouse.MouseMoved += mouse_MouseMoved;

private void mouse_MouseMoved(MouseDevice sender, MouseEventArgs args)
{
    tb.Text = args.MouseDelta.X.ToString();
}