NSScreenNumber changes (randomly)?

In my application I need to distinguish between different displays, which I do by using the NSScreenNumber key of the deviceDescription dictionary provided by NSScreen. So far everything has worked flawlessly, but now all of a sudden I sometimes get a different screen ID for my main screen (it's a laptop and I haven't attached a second screen in months; it's always the same hardware). The ID used to be 69676672, but now most of the time I get 2077806975.
At first I thought I might be misinterpreting the NSNumber somehow, but that doesn't seem to be the case; I also checked using the CGMainDisplayID() function and I get the same value. What is even weirder is that some of Apple's applications still seem to get the old ID: e.g. the desktop image is referenced in its config file by screen ID, and when updating the desktop image, Apple's desktop image app uses the "correct" (= old) ID.
I am starting to wonder whether a change in a recent update (10.7.1 or 10.7.2) might have led to this. Has anybody else noticed something similar or had this issue before?
Here is the code that I use:
// This is in an NSScreen category
- (NSNumber *) uniqueScreenID {
    return [[self deviceDescription] objectForKey:@"NSScreenNumber"];
}
And for getting an int:
// Assuming screen points to an instance of NSScreen
NSLog(@"Screen ID: %i", [[screen uniqueScreenID] intValue]);
This is starting to get frustrating, appreciate any help/ideas, thanks!
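For completeness, this is roughly how I compare the two values (a minimal debugging sketch; the helper name is illustrative):
#import <Cocoa/Cocoa.h>

// Log every screen's NSScreenNumber together with CGMainDisplayID()
static void LogScreenIDs(void)
{
    for (NSScreen *screen in [NSScreen screens]) {
        NSNumber *screenID = [[screen deviceDescription] objectForKey:@"NSScreenNumber"];
        NSLog(@"Screen %@: NSScreenNumber = %@", NSStringFromRect([screen frame]), screenID);
    }
    NSLog(@"CGMainDisplayID() = %u", CGMainDisplayID());
}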

For Macs that have both built-in and discrete graphics (such as MacBook Pro models with on-board Intel graphics and a separate graphics card), the display ID can change when the system automatically switches between the two. You can disable "Automatic graphics switching" in the Energy Saver preferences pane to test whether this is the cause of your screen number changes (when it is disabled, the system always uses the discrete graphics card).
On such systems, which graphics hardware is in use at a particular time depends on the applications that are currently running and their needs. I believe any use of OpenGL by an application would cause a switch to the discrete graphics card, for instance.
If you need to notice when such a switch occurs while your application is running, you can register a callback (CGDisplayRegisterReconfigurationCallback) and examine the changes that occur (kCGDisplayAddFlag, kCGDisplayRemoveFlag, etc.). If you're trying to match a display to one previously used/encountered, you will need to go beyond just comparing display IDs.
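A minimal sketch of such a callback (the function name is illustrative; only the CoreGraphics calls and flags are from the answer):
#import <Cocoa/Cocoa.h>

// Called by the window server whenever the display configuration changes.
static void DisplayReconfigurationCallback(CGDirectDisplayID displayID,
                                           CGDisplayChangeSummaryFlags flags,
                                           void *userInfo)
{
    if (flags & kCGDisplayBeginConfigurationFlag) {
        return; // the change is only starting; wait for the completion pass
    }
    if (flags & kCGDisplayAddFlag) {
        NSLog(@"Display %u added", displayID);
    }
    if (flags & kCGDisplayRemoveFlag) {
        NSLog(@"Display %u removed", displayID);
    }
    // A graphics switch typically shows up as a remove/add pair with a new ID.
}

// Register once, e.g. in applicationDidFinishLaunching::
// CGDisplayRegisterReconfigurationCallback(DisplayReconfigurationCallback, NULL);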

Related

How to capture App Screen as Video with Audio in Mac OSX?

I am writing a macOS (OS X) application where I need to record only the view of my application (not the whole display) together with the audio it emits.
Think of it as a game app where I need to record the complete gameplay view of the application. How should I go about doing this?
I am aware of AVCaptureScreenInput and the example, but how do I capture only the view of my application?
From the website you posted:
Note: By default, AVCaptureScreenInput captures the entire screen. You may set its cropRect property to limit the capture rectangle to a subsection of the screen.
Just set this property to the window's/view's rect and you're done.
Of course you need to update and restart the recording when the window's/view's rect changes.
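A rough sketch of that, assuming the view lives on the main display (the method name, the session argument, and the coordinate conversion are illustrative; double-check the coordinate space cropRect expects for your display setup):
#import <AVFoundation/AVFoundation.h>
#import <Cocoa/Cocoa.h>

- (void)cropCaptureToView:(NSView *)view session:(AVCaptureSession *)session
{
    AVCaptureScreenInput *input =
        [[AVCaptureScreenInput alloc] initWithDisplayID:CGMainDisplayID()];

    // Convert the view's bounds to screen coordinates and limit the capture to them.
    NSRect rectInWindow = [view convertRect:[view bounds] toView:nil];
    NSRect rectOnScreen = [[view window] convertRectToScreen:rectInWindow];
    input.cropRect = NSRectToCGRect(rectOnScreen);

    if ([session canAddInput:input]) {
        [session addInput:input];
    }
}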
Read the document carefully; there is a comment about displays:
// If you're on a multi-display system and you want to capture a secondary display,
// you can call CGGetActiveDisplayList() to get the list of all active displays.
// For this example, we just specify the main display.
// To capture both a main and secondary display at the same time, use two active
// capture sessions, one for each display. On Mac OS X, AVCaptureMovieFileOutput
// only supports writing to a single video track.
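A minimal sketch of that enumeration (session setup omitted; the array size is an arbitrary upper bound for this sketch):
#import <AVFoundation/AVFoundation.h>

// Enumerate active displays; each one would get its own capture session,
// since AVCaptureMovieFileOutput only writes a single video track.
uint32_t displayCount = 0;
CGDirectDisplayID displays[16];
CGGetActiveDisplayList(16, displays, &displayCount);

for (uint32_t i = 0; i < displayCount; i++) {
    AVCaptureScreenInput *input =
        [[AVCaptureScreenInput alloc] initWithDisplayID:displays[i]];
    // ... create a separate AVCaptureSession per display and add this input to it
}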

Windows Desktop Manager Overlay

I'd like to make an overlay with the following properties:
should work at least on Windows 8.1
should be on top of everything, like a mouse cursor
should incorporate the pixels which are already in the background, like a blur filter
no flickering
Details on each of these points:
1) I assume that DWM is active and DirectX 11.2 is used. Sure, it would be nice to have it working on other Windows versions, but this has no priority.
2) The problem is that when simply using WS_EX_TOPMOST, application menus appear over my overlay. In my case this really hurts, as I'd like to display something with the same properties as a cursor. Imagine that a cursor is suddenly hidden when you open a menu -> unacceptable.
3) I'd like to read the pixels from the Windows desktop, including any effect Windows applies (like blur), and use this information for my filter. If I add my overlay, as described in 2), I should be able to get a fresh, unobstructed copy of the background in the next frame and not read back my own overlay.
4) If I just write something into the Windows desktop directly, it gets overwritten immediately on the next frame by Windows itself. This is not acceptable.
One example of such an application is a magnifying glass, which has exactly the properties I need. But for that case Windows 8.1 has an API. In contrast, I'd like to write a program which displays a hand on the desktop (controlled by a Leap Motion) that influences the Windows desktop, so you almost "feel" how you move your hand over the desktop.
If I write a tiny DirectX and/or OpenGL application just for myself, this is all very easy:
render all the regular stuff to a texture
use this texture for a post processing filter and add all my stuff on top of it
render just a quad to the back buffer
But I like to do that for the whole Windows desktop.
I found many different applications, but they are of no use to me:
applications which claim to be on top are still behind menus. This normally doesn't really hurt, but is unacceptable for a cursor-like thing
screen-capturing programs which hook themselves into all running programs are nice, but I would rather hook into DWM itself
normally screen-capturing programs do not draw anything into the back buffer, so every frame they get a new, unobstructed back buffer
My question boils down to: how can I write my own magnifying glass for Windows 8.1?
I fear that my only serious option is to hook into DWM, which is what I am trying to avoid.
I'm happy to hear any ideas on how to achieve this, or pointers to applications which do what I describe.

SDL2: Share renderer between multiple windows

I have a set of images that I need to show on different displays, so I create two windows and two renderers. But some images may be shown on several displays, and if a texture was created with rendererOne and then shown with rendererTwo, the program crashes.
If I create the texture at runtime each time I need to show it, the FPS drops.
What is the best way to solve this problem? Can I share a renderer between windows (on different displays)? Or can I share a texture between different renderers?
P.S. I could tag the image names like "Image1.one.two.png" or "Image2.one.png" and so on, and create two copies of Image1 and one copy of Image2, but that is a very cumbersome approach and requires a lot of RAM.
P.P.S. I don't use OpenGL directly.
I solved this problem by using lazy initialization of the texture. I store the SDL_Surface, and when I need to show a texture I check it first:
if (m_texture == nullptr || !m_texture->CompatibleWithRenderer(renderer))
{
    m_texture = new Texture(renderer, m_surface);
}
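The same idea with the plain SDL2 API looks roughly like this (a sketch; the struct and function names are illustrative, not from the question):
#include <SDL.h>

typedef struct {
    SDL_Surface  *surface;       // renderer-independent pixel data
    SDL_Texture  *texture;       // cached texture, created on demand
    SDL_Renderer *textureOwner;  // renderer the cached texture was created for
} LazyImage;

static SDL_Texture *LazyImage_GetTexture(LazyImage *image, SDL_Renderer *renderer)
{
    // (Re)create the texture if none exists yet or it belongs to another renderer.
    if (image->texture == NULL || image->textureOwner != renderer) {
        if (image->texture != NULL) {
            SDL_DestroyTexture(image->texture);  // avoid leaking the old texture
        }
        image->texture = SDL_CreateTextureFromSurface(renderer, image->surface);
        image->textureOwner = renderer;
    }
    return image->texture;
}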

SwapChainBackgroundPanel not calling Rendering event when GPU picking - DirectX and XAML

I have already sort of asked this question here (Previous Question), but it only got a handful of views and zero answers/comments, so I thought I'd give it another go with some more info that I've found.
I basically have a Windows Store DirectX + XAML app that I'm developing. I currently have the problem that the Rendering event of the SwapChainBackgroundPanel that I use for DirectX rendering (as per the Windows 8 example on MSDN) sometimes isn't called when the user is interacting with the app.
It will continue to update if I am doing something with the camera, such as changing what it's looking at based on touch/mouse position, but it won't be called while I am picking, and I don't know why.
I use the standard GPU picking method (where I render the scene with a unique color for each object and then take a 1x1 texture of the press area to find the selected object), but when I am using this picking technique to select multiple objects (the user drags their finger/mouse over many objects), Rendering isn't being called. So in effect lots of objects get selected, but the user only sees this when they lift their finger/release the mouse button.
Is there any reason why this is happening? Is it because of the GPU picking method? And if so is there a way around it rather than using the ray-trace picking method (which considerably slows down picking for a large number of objects)?
Has anyone else had this problem? Is there an explanation from Microsoft anywhere that it is deliberate that rendering doesn't get called while this is happening?
Thanks for your time.

How to monitor for swipe gesture globally in OS X

I'd like to make an OS X application that runs in the background and performs some function when a swipe down with four fingers is detected on the trackpad.
Seems easy enough. Apple's docs show almost exactly this here. Their example monitors for mouse down events. As a simple test, I put the following in applicationDidFinishLaunching: in my AppDelegate.
void (^handler)(NSEvent *e) = ^(NSEvent *e) {
    NSLog(@"Left Mouse Down!");
};
[NSEvent addGlobalMonitorForEventsMatchingMask:NSLeftMouseDownMask handler:handler];
This works as expected. However, changing NSLeftMouseDownMask to NSEventMaskSwipe does not work. What am I missing?
Well, the documentation for NSEvent's +addGlobalMonitorForEventsMatchingMask:handler: gives a list of the events it supports, and NSEventMaskSwipe is not listed, so it's to be expected that it doesn't work.
While the API obviously supports tracking gestures locally within your own application (through NSResponder), I believe gestures can't be tracked globally by design. Unlike key combinations, there are far fewer forms/types of gestures... essentially only:
pinch in/out (NSEventTypeMagnify)
rotations (NSEventTypeRotate)
directional swipes with X amount of fingers (NSEventTypeSwipe)
There's not as much freedom. With keys, you have plenty of modifiers (control, option, command, shift) plus all the alphanumeric keys, making plenty of possible combinations, so it's easier to avoid conflicts between local and global events. Similarly, mouse events are region-based; clicking in one region can easily be differentiated from clicking in another region (from both the program's and the user's point of view).
Because of this smaller set of possible touch-event combinations, I believe Apple might purposely be restricting global usage (as in, one app responding to one or more gestures for the whole system) to its own features (Mission Control, Dashboard, etc.).
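For local handling, a minimal sketch of the NSResponder route mentioned above (inside a custom view; the deltaX/deltaY sign convention is worth verifying):
// In a custom NSView (or other NSResponder) subclass
- (BOOL)acceptsFirstResponder {
    return YES;
}

- (void)swipeWithEvent:(NSEvent *)event {
    // For swipe events, deltaX/deltaY indicate the direction of the gesture.
    NSLog(@"Swipe gesture: deltaX = %f, deltaY = %f",
          (double)[event deltaX], (double)[event deltaY]);
}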