Changing display resolution in-game: practices - game-engine

Question: What are the uses of changing the display resolution in fullscreen mode in games, and should it be allowed?
More info: The way I see it,
Privilege perspective: When you request a window context from your window manager, you usually also have the option, via a flag, to put the window into fullscreen mode. In fullscreen mode you can set the window resolution to one of your choice, usually from a given list, and in most cases that will change the display resolution, affecting your desktop environment. Should a program that is not your window manager have control over your display monitor?
Bugs: With OS/software dependence in mind, when a program faults and crashes without exiting correctly, you may be left stuck at the resolution your software asked for (the same applies to gamma ramps when not using shaders).
Practices: Since the dawn of PC video games, as far as I know, you always ended up with a "native resolution" and everything just getting rescaled to it. In today's world you also have dynamic resolution scaling and other tricks with framebuffer objects. Do we gain any benefit from actually changing the display resolution?
Performance-wise, you can always change other settings to improve performance, not just the resolution.
Aspect ratio in mind: if your monitor's desktop environment uses 16:9 by default, would you actually change it to something else? Obviously you should have the option, but should it be changed through software that is not your window manager?
Thanks in advance. I may end up revising the question during the first few hours after posting.

Always pick nice defaults that don't change the display mode on launch. If the user wants to go to the graphics settings page and set it to 1024x768 full-screen mode though, let them - they probably know what they're doing.
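To make that concrete, here's a minimal sketch using SDL2 (SDL2 is just my pick for the example, not something this answer depends on): start in borderless desktop fullscreen, which leaves the display mode alone, and only switch to an exclusive mode when the user explicitly asks for one.

```cpp
// Minimal SDL2 sketch. Default: borderless fullscreen at the desktop
// resolution, so the display mode is never touched and a crash can't strand
// the user at 1024x768. Error handling is kept minimal.
#include <SDL.h>

int main(int, char**)
{
    if (SDL_Init(SDL_INIT_VIDEO) != 0) return 1;

    SDL_Window* win = SDL_CreateWindow(
        "Game", SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
        1280, 720,                         // size only matters if we leave fullscreen
        SDL_WINDOW_FULLSCREEN_DESKTOP);    // keeps the current display mode

    // Only if the user explicitly picked it on the graphics settings page:
    // exclusive fullscreen, which really changes the display resolution.
    const bool userWantsExclusive1024x768 = false;  // read from settings in a real game
    if (userWantsExclusive1024x768) {
        SDL_DisplayMode wanted{};
        wanted.w = 1024;
        wanted.h = 768;
        SDL_DisplayMode closest{};
        if (SDL_GetClosestDisplayMode(0, &wanted, &closest) != nullptr) {
            SDL_SetWindowDisplayMode(win, &closest);
            SDL_SetWindowFullscreen(win, SDL_WINDOW_FULLSCREEN);
        }
    }

    SDL_Delay(3000);  // stand-in for the game loop
    SDL_DestroyWindow(win);
    SDL_Quit();
    return 0;
}
```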
As for performance, there can be reasons to use non-native full-screen. In addition to the performance benefit you get from drawing fewer pixels, you get access to fixed-function hardware that scales up the image on scan-out. If you scale up in the renderer itself, this is at least one extra full-screen read + write. If you're fill-rate bound, this can impact your framerate.
Something to consider though is that if you have any kind of post-process pass, you will usually be doing a read+write anyway, so you can do scaling at the same time effectively for free. Also, using native resolution for your final target is the only way to get crisp edges for things like text. Often, a game will use a scaled down target for 3D rendering, and merge with a native-resolution overlay.
In a nutshell:
Pick good defaults (windowed, or windowed-fullscreen native resolution).
Allow the user to set the window target resolution and mode (windowed, windowed-fullscreen, or fullscreen).
Allow the user to independently select the 3D resolution scale factor (25% - 200%).
Always draw the HUD and any text at the window target resolution.
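To make the last three bullets concrete, here is a rough OpenGL sketch of that setup: the 3D scene goes into an offscreen target sized by the resolution scale factor, gets stretched to the window, and the HUD/text is drawn on top at the window resolution. It assumes a GL 3.3+ context is already current and that a loader such as GLAD has been initialised; the struct and function names are just illustrative.

```cpp
#include <glad/glad.h>

struct ScaledTarget {
    GLuint fbo = 0, color = 0, depth = 0;
    int width = 0, height = 0;
};

// Create an offscreen render target at (window size * scale), e.g. scale = 0.75.
ScaledTarget create_scaled_target(int window_w, int window_h, float scale)
{
    ScaledTarget t;
    t.width  = static_cast<int>(window_w * scale);
    t.height = static_cast<int>(window_h * scale);

    glGenTextures(1, &t.color);
    glBindTexture(GL_TEXTURE_2D, t.color);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, t.width, t.height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    glGenRenderbuffers(1, &t.depth);
    glBindRenderbuffer(GL_RENDERBUFFER, t.depth);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, t.width, t.height);

    glGenFramebuffers(1, &t.fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, t.fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, t.color, 0);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT,
                              GL_RENDERBUFFER, t.depth);
    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
        // handle the error properly in a real engine
    }
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    return t;
}

// Per frame: 3D goes into the scaled target, gets stretched to the window,
// and the HUD/text is drawn last at the full window resolution.
void render_frame(const ScaledTarget& t, int window_w, int window_h)
{
    glBindFramebuffer(GL_FRAMEBUFFER, t.fbo);
    glViewport(0, 0, t.width, t.height);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // draw_scene();  // 3D pass at the reduced resolution

    // Upscale (or downscale) to the window. If you already have a
    // post-process pass, do the scaling there instead of a separate blit.
    glBindFramebuffer(GL_READ_FRAMEBUFFER, t.fbo);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
    glBlitFramebuffer(0, 0, t.width, t.height, 0, 0, window_w, window_h,
                      GL_COLOR_BUFFER_BIT, GL_LINEAR);

    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glViewport(0, 0, window_w, window_h);
    // draw_hud_and_text();  // overlay at native resolution, so it stays crisp
}
```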

Related

How to track the location of a window belonging to another app

When screen sharing a specific window on macOS with Zoom or Skype/Teams, they draw a red or green highlight border around that window (which belongs to a different application) to indicate it is being shared. The border follows the target window in real time, through resizing, z-order changes, etc.
What macOS APIs and techniques might be used to achieve this effect?
You can find the location of windows using CGWindowListCopyWindowInfo and related APIs, which are available to sandboxed apps.
This is a very fast and efficient API, fast enough to be polled. The SonOfGrab sample code is a great platform to try out this stuff.
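For illustration, here is a minimal sketch against the CoreGraphics C API that polls the bounds of one window by its CGWindowID (the window ID value and the function name are placeholders, not taken from SonOfGrab):

```cpp
// Compile on macOS with: clang++ track.cpp -framework CoreGraphics
#include <CoreGraphics/CoreGraphics.h>
#include <cstdio>

// Returns true and fills `bounds` if the window is currently known to the
// window server.
bool copy_window_bounds(CGWindowID target, CGRect* bounds)
{
    CFArrayRef info = CGWindowListCopyWindowInfo(
        kCGWindowListOptionIncludingWindow, target);
    if (!info) return false;

    bool found = false;
    if (CFArrayGetCount(info) > 0) {
        CFDictionaryRef entry =
            (CFDictionaryRef)CFArrayGetValueAtIndex(info, 0);
        CFDictionaryRef boundsDict =
            (CFDictionaryRef)CFDictionaryGetValue(entry, kCGWindowBounds);
        if (boundsDict &&
            CGRectMakeWithDictionaryRepresentation(boundsDict, bounds)) {
            found = true;
        }
    }
    CFRelease(info);
    return found;
}

int main()
{
    CGWindowID target = 123;  // placeholder: look this up via kCGWindowOwnerName etc.
    CGRect r;
    if (copy_window_bounds(target, &r)) {
        std::printf("window at %.0f,%.0f size %.0fx%.0f\n",
                    r.origin.x, r.origin.y, r.size.width, r.size.height);
        // Position your transparent border window slightly larger than `r`.
    }
    return 0;
}
```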
You can also install a global event monitor using +[NSEvent addGlobalMonitorForEventsMatchingMask:handler:] (available in the sandbox) to track mouse-down, drag, and mouse-up events, so you can respond immediately whenever the user starts or releases a drag. This way your response will be snappy.
(Drawing a border would be done by creating your own transparent window, slightly larger than, and at the same window layer as, the window you are tracking. And then simply draw a pretty green box into it. I'm not exactly sure about setting the z-order. The details of this part would be best as a separate question.)

Blurry UI on high-DPI Windows systems

wxWidgets 3.1 claims to fix the Windows high-DPI issues. It does work, but the UI is blurry and fonts/bitmaps look stretched.
I went through https://learn.microsoft.com/en-us/windows/desktop/hidpi/high-dpi-desktop-application-development-on-windows
I made the manifest changes to declare my application DPI-aware; that removed the blur, but the layout went wrong: everything looks smaller (unusable UI).
Note: the issue is more pronounced on 3K and 4K systems. Hardcoded pixel sizes are not scaling (e.g. a 400px-wide button, a 500px-wide panel, etc.).
wxWidgets gives you a (relatively simple) way to make your application work in high DPI, but doesn't -- and can't -- do it automatically for you. In particular, only sizer-based layouts without hardcoded pixel sizes will work correctly, and you do need to provide your own higher-definition artwork.
Concerning the existing pixel values, the simplest (even though not really the best) way to make them work better is to put FromDIP() calls around them.
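For example (a minimal sketch with made-up names):

```cpp
#include <wx/wx.h>

// Wrap hard-coded pixel values in FromDIP() so they scale with the DPI:
// 400 "design" pixels at 100% scaling become 800 physical pixels at 200%.
wxButton* MakeWideButton(wxWindow* parent)
{
    return new wxButton(parent, wxID_ANY, "OK", wxDefaultPosition,
                        wxSize(parent->FromDIP(400), wxDefaultCoord));
}

// FromDIP() also has a wxSize overload for two hard-coded dimensions at once.
void SetPanelMinSize(wxPanel* panel)
{
    panel->SetMinSize(panel->FromDIP(wxSize(500, 300)));
}
```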
Also note that you don't need to do anything special for pixel values in XRC, they're already interpreted as being resolution-independent pixels and are scaled according to the DPI automatically.

Prevent wxWidgets app from scaling sizes on higher DPI

I'm writing a wxWidgets (3.1.0) app that is supposed to work on Windows and Mac.
On Windows, when I set the text scaling to more than 100%, the sizes of my controls get all messed up. I have a DPI manifest that says my app is DPI-aware. I also set the font pixel size on my dialogs, and that works to some extent. When I set the size of some element from code, it is sized to that pixel size, which is what I need, but any size that is set in the XRC file gets scaled up. Also, when I try to reduce the size of any wxSpinCtrl, it can be reduced normally to some point, but then only the text box gets smaller and the buttons remain disproportionately large. So is there a way to tell my app not to scale any sizes and just let everything be exactly the same pixel size as it would be on a normal DPI (despite the fact that my app will look small on higher resolutions)?
There is no way to prevent the proper scaling from being applied using the wxWidgets API, and I don't think this is going to change, because it just doesn't seem to make any sense.
However, rebuilding wxWidgets with wxHAVE_DPI_INDEPENDENT_PIXELS defined should trick the library into thinking that the underlying graphical toolkit already scales pixel values, and so prevent it from doing this on its own. I've never tested this but, AFAICS, it should result in what you want.
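For reference, and with the same caveat that this is untested: the define just has to be visible when the library itself is compiled. Where exactly to put it is my assumption, e.g.:

```cpp
// In include/wx/msw/setup.h (or any header seen by every wxWidgets source
// file), before rebuilding the library -- untested, as noted above:
#define wxHAVE_DPI_INDEPENDENT_PIXELS 1
```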
Nevertheless, let me reiterate that what you want is totally wrong, and the real fix for this problem is to explain this to whoever decided to do it.

How to size my UI components for a Cocoa Mac app given the potential variety of resolutions it could be displayed on?

Cocoa uses a drawing system (user coordinate space) measured in "points" which are resolution independent...sounds great
While we need to be concerned with our app running in many resolutions, Cocoa is going to take care of that for us in (1) above...sounds too good to be true!
It does scale our controls as resolution changes...this is good.
BUT the screen size increases as my resolution increases...this is not good; I thought we had a drawing canvas that was independent of the resolution!
What if the controls shrink to silly small levels as the resolution increases - should I be concerned about this?
To summarize: is there a "standard" resolution I should design for, so that all the automatic scaling by Apple will look fine?
[Confused while reading the Apple Programmer Guide on the topic of Drawing]
You do not need to be concerned about this. The user is only allowed to select resolutions which make sense given the physical size of the display, so the standard controls will always be "large enough". You just need to test your app on Retina and non-Retina displays (and ideally both at the same time, with an external 1x monitor plugged into a 2x machine; move your windows between the two screens and check that your images update accordingly).

General considerations for NUI/touch interface

For the past few months I've been looking into developing a Kinect-based multitouch interface for a variety of software music synthesizers.
The overall strategy I've come up with is to create objects, either programmatically or (if possible) algorithmically, to represent the various controls of the soft synth. These should have:
X position
Y position
Height
Width
MIDI output channel
MIDI data scaler (convert x-y coords to midi values)
Two strategies I've considered for algorithmic creation are an XML description and somehow pulling stuff right off the screen (i.e. given a running program, find the x/y coords of all controls). I have no idea how to go about that second one, which is why I express it in such specific technical language ;). I could do some intermediate solution, like using mouse clicks on the corners of controls to generate an XML file. Another thing I could do, which I've seen frequently in Flash apps, is to put the screen size into a variable and use math to build all interface objects in terms of screen size. Note that it isn't strictly necessary to make the objects the same size as the onscreen controls, or to represent all onscreen objects (some are just indicators, not interactive controls).
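For concreteness, one possible C++ sketch of such an object (the field types and names are just my guesses):

```cpp
#include <cstdint>
#include <functional>

// One interface object mirroring a soft-synth control, with the fields
// listed above. Coordinates could be normalised (0..1) or in pixels.
struct SynthControl {
    float x = 0.0f, y = 0.0f;         // X/Y position
    float width = 0.0f, height = 0.0f;
    std::uint8_t midiChannel = 0;     // MIDI output channel (0-15)
    // MIDI data scaler: maps an x/y coordinate pair to a 0-127 MIDI value.
    std::function<std::uint8_t(float x, float y)> scaler;

    bool contains(float px, float py) const {
        return px >= x && px <= x + width && py >= y && py <= y + height;
    }
};
```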
Other considerations:
Given (for now) two sets of x/y coords as input (left and right hands), what is my best option for using them? My first instinct is/was to create some kind of focus test, where if the x/y coords fall within an interface object's bounds, that object becomes active, and then becomes inactive if they fall outside some other, smaller bounds for some period of time (a rough sketch of this follows the summary below). The cheap solution I found was to use the left hand as the pointer/selector and the right as a controller, but it seems like I can do more. I have a few gesture solutions (hidden Markov models) I could screw around with. Not that they'd be easy to get to work, exactly, but it's something I could see myself doing given sufficient incentive.
So, to summarize, the problem is
represent the interface (necessary because the default interface always expects mouse input)
select a control
manipulate it using two sets of x/y coords (rotary/continuous controller) or, in the case of switches, preferably use a gesture to switch it without giving/taking focus.
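Here is the rough sketch of the focus-test idea mentioned above. The thresholds, and the use of a slightly enlarged release region for hysteresis, are arbitrary choices of mine, not a recommendation:

```cpp
// Dwell-based focus test: a control becomes active when a hand coordinate
// stays inside its bounds long enough, and loses focus after the hand has
// been outside a slightly enlarged region for a while.
#include <chrono>
#include <cstddef>
#include <utility>
#include <vector>

struct Box { float x, y, w, h; };   // stand-in for the interface objects above

class DwellSelector {
public:
    explicit DwellSelector(std::vector<Box> controls)
        : controls_(std::move(controls)) {}

    // Call once per tracking frame with one hand's coordinates.
    // Returns the index of the active control, or -1 if none has focus.
    int update(float hx, float hy)
    {
        const auto now = std::chrono::steady_clock::now();

        if (active_ >= 0) {
            const Box& b = controls_[static_cast<std::size_t>(active_)];
            const bool insideLoose =
                hx >= b.x - kMargin && hx <= b.x + b.w + kMargin &&
                hy >= b.y - kMargin && hy <= b.y + b.h + kMargin;
            if (insideLoose)
                lastInside_ = now;
            else if (now - lastInside_ > kRelease)
                active_ = -1;                      // focus lost
            return active_;
        }

        int hit = -1;
        for (std::size_t i = 0; i < controls_.size(); ++i) {
            const Box& b = controls_[i];
            if (hx >= b.x && hx <= b.x + b.w && hy >= b.y && hy <= b.y + b.h) {
                hit = static_cast<int>(i);
                break;
            }
        }

        if (hit < 0) {
            candidate_ = -1;                       // hand is over nothing: reset
        } else if (hit != candidate_) {
            candidate_ = hit;                      // start the dwell timer
            candidateSince_ = now;
        } else if (now - candidateSince_ > kDwell) {
            active_ = hit;                         // dwelled long enough: select
            lastInside_ = now;
        }
        return active_;
    }

private:
    static constexpr std::chrono::milliseconds kDwell{500};
    static constexpr std::chrono::milliseconds kRelease{750};
    static constexpr float kMargin = 0.05f;        // hysteresis around the active box

    std::vector<Box> controls_;
    int active_ = -1;
    int candidate_ = -1;
    std::chrono::steady_clock::time_point candidateSince_{};
    std::chrono::steady_clock::time_point lastInside_{};
};
```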
Any comments, especially from people who have worked/are working in multitouch io/NUI, are greatly appreciated. Links to existing projects and/or some good reading material (books, sites, etc) would be a big help.
Whoa, lots of stuff here. I worked on lots of NUI stuff during my time at Microsoft, so let's see what we can do...
But first, I need to get this pet peeve out of the way: you say "Kinect-based multitouch". That's just wrong. Kinect inherently has nothing to do with touch (which is why you have the "select a control" challenge). The types of UI consideration needed for touch, body tracking, and mouse are totally different. For example, in touch UI you have to be very careful about resizing things based on screen size/resolution/DPI... regardless of the screen, fingers are always the same physical size and people have the same degree of physical accuracy, so you want your buttons and similar controls to always be roughly the same physical size. Research has found 3/4 of an inch to be the sweet spot for touchscreen buttons. This isn't so much of a concern with Kinect, though, since you aren't directly touching anything - accuracy is dictated not by finger size but by sensor accuracy and the user's ability to precisely control finicky & lagging virtual cursors.
If you spend time playing with Kinect games, it quickly becomes clear that there are 4 interaction paradigms.
1) Pose-based commands. User strikes and holds a pose to invoke some application-wide action or command (usually bringing up a menu)
2) Hover buttons. User moves a virtual cursor over a button and holds still for a certain period of time to select the button
3) Swipe-based navigation and selection. User waves their hand in one direction to scroll a list and in another direction to select from the list
4) Voice commands. User just speaks a command.
There are other mouse-like ideas that have been tried by hobbyists (I haven't seen these in an actual game) but frankly they suck: 1) using one hand for the cursor and the other hand to "click" where the cursor is, or 2) using the z-coordinate of the hand to determine whether to "click"
It's not clear to me whether you are asking about how to make some existing mouse widgets work with Kinect. If so, there are some projects on the web that will show you how to control the mouse with Kinect input but that's lame. It may sound super cool but you're really not at all taking advantage of what the device does best.
If I were building a music synthesizer, I would focus on approach #3 - swiping. Something like Dance Central. On the left side of the screen, show a list of your MIDI controllers with some small visual indication of their status. Let the user swipe their left hand to scroll through and select a controller from this list. On the right side of the screen, show how you are tracking the user's right hand within some plane in front of their body. Now you're letting them use both hands at the same time, giving immediate visual feedback of how each hand is being interpreted, and not requiring them to be super precise.
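To sketch what the left-hand swipe detection might look like (the thresholds and names are guesses of mine, not something from Dance Central or the Kinect SDK):

```cpp
// A swipe fires when the hand travels far enough horizontally, mostly
// horizontally, within a short time window.
#include <chrono>
#include <cmath>
#include <deque>

class SwipeDetector {
public:
    // Call once per skeleton frame with the left-hand position in metres.
    // Returns -1 for a left swipe, +1 for a right swipe, 0 otherwise.
    int update(float handX, float handY)
    {
        using clock = std::chrono::steady_clock;
        const auto now = clock::now();
        samples_.push_back({now, handX, handY});

        // Keep only the last 300 ms of samples.
        while (!samples_.empty() &&
               now - samples_.front().t > std::chrono::milliseconds(300))
            samples_.pop_front();

        const Sample& first = samples_.front();
        const float dx = handX - first.x;
        const float dy = handY - first.y;

        // Mostly-horizontal travel of at least 20 cm counts as a swipe.
        if (std::fabs(dx) > 0.20f && std::fabs(dx) > 2.0f * std::fabs(dy)) {
            samples_.clear();            // debounce: start over after firing
            return dx > 0 ? +1 : -1;
        }
        return 0;
    }

private:
    struct Sample { std::chrono::steady_clock::time_point t; float x, y; };
    std::deque<Sample> samples_;
};
```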
PS: I'd also like to give a shout out to Josh Blake's upcoming NUI book. It's good stuff. If you really want to master this area, go order a copy :) http://www.manning.com/blake/