How to create a swanky SurfaceSlider

I am new to Surface programming and stumbled upon this image, which I understand is a slider control on a tag visualization (in this case a card). This slider:
is curved, as opposed to the conventional straight track
has a bigger thumb which displays the current position (eliminating the need for a separate label)
has a glowing feel (I understand this is due to overlapping controls with different blur radii)
Can anyone help with how to make such a control?
-V

This is a custom-built control rather than a standard SurfaceSlider. It's not built using TagVisualizer either, but that's only because the app shown in this picture was built about two years before TagVisualizer existed.
Now, you should certainly use TagVisualizer to streamline an implementation of this, but you'll still have to create a custom slider control - SurfaceSlider will not be a good fit because it assumes that the user is moving their finger linearly.
Within your custom arc-shaped slider control, you can use SurfaceThumb (which SurfaceSlider itself uses) to get the big glowing thumb... then you just need to listen to the Delta events on the thumb and move it along the constrained path as appropriate.
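Here is a minimal sketch of that idea, assuming a plain WPF setup where the thumb is a standard Thumb (SurfaceThumb derives from Thumb, so the same DragDelta handling applies); the ArcSlider class and its Center/Radius/MinAngle/MaxAngle members are made-up names for illustration, not part of the Surface SDK.

```csharp
using System;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Controls.Primitives;

public class ArcSlider : Canvas
{
    private readonly Thumb _thumb = new Thumb();   // swap in SurfaceThumb on Surface
    private double _angle = 180;                   // current position, in degrees along the arc

    public Point Center { get; set; } = new Point(200, 200);
    public double Radius { get; set; } = 150;
    public double MinAngle { get; set; } = 180;    // one end of the arc
    public double MaxAngle { get; set; } = 360;    // the other end of the arc

    public ArcSlider()
    {
        _thumb.Width = _thumb.Height = 60;
        Children.Add(_thumb);
        _thumb.DragDelta += OnDragDelta;
        PositionThumb();
    }

    private void OnDragDelta(object sender, DragDeltaEventArgs e)
    {
        // Project the finger movement back onto the arc: take the point the user is
        // "pulling" the thumb toward and keep only its angle around the center.
        // (Wrap-around handling at the 0/360 boundary is omitted for brevity.)
        Point current = ThumbCenter();
        Point desired = new Point(current.X + e.HorizontalChange,
                                  current.Y + e.VerticalChange);
        double angle = Math.Atan2(desired.Y - Center.Y, desired.X - Center.X) * 180.0 / Math.PI;
        if (angle < 0) angle += 360;
        _angle = Math.Max(MinAngle, Math.Min(MaxAngle, angle));
        PositionThumb();
    }

    private Point ThumbCenter()
    {
        double rad = _angle * Math.PI / 180.0;
        return new Point(Center.X + Radius * Math.Cos(rad),
                         Center.Y + Radius * Math.Sin(rad));
    }

    private void PositionThumb()
    {
        Point p = ThumbCenter();
        SetLeft(_thumb, p.X - _thumb.Width / 2);
        SetTop(_thumb, p.Y - _thumb.Height / 2);
    }
}
```

From `_angle` you can then derive the value the big thumb displays, the same way a linear slider maps thumb position to Value.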

Related

Can a VkSurfaceKHR represent only a whole window? Or also a portion of a window (ie some rectangular widget)? [duplicate]

We have an application which has a window with a horizontal toolbar at the top. The window handle we pass to Vulkan to create the surface ends up including the area behind the toolbar, i.e. Vulkan is completely unaware of the toolbar and the surface includes the space "behind" it.
My question is, can a surface represent only a portion of this window? We obviously need not process data for the pixels that lie behind the toolbar, and so want to avoid creating a frame buffer, depth buffer etc. bigger than necessary.
I fully understand that I can accomplish this visually using a viewport which e.g. has an origin offset and height compensation, however to my understanding the frame buffer actually still contains information for pixels the full size of the surface (e.g. 800x600 for an 800x600 client-area window) even if I am only rendering to a portion of that window. The frame buffer then gets "mapped" and therefore squished to the viewport area.
All of this has sort of left me wondering what the purpose of a viewport is. If it simply defines a mapping from your image buffer to an area of the surface, is that not highly inefficient if your framebuffer contains considerably more pixels than the area it is being mapped to? Would it not make more sense to section off portions of your application using e.g. different Windows HWNDs first, and then create separate surfaces from there onwards?
How can I avoid rendering to an area bigger than necessary?
The way this gets handled for pretty much every application is that the client area of a window (i.e. the stuff that isn't toolbars and the like) is a child window of the main frame window. When the frame is resized, you resize the client window to match the new client area (taking into account the new sizes of the toolbars, etc.).
It is this client window which should have a Vulkan surface created for it.
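For illustration, here is a minimal sketch of that arrangement using C#/WinForms, purely because it makes the parent/child relationship easy to see; your actual app may use a different toolkit, and the class and field names are made up. The Panel docked with Fill is a child window of the frame, gets resized automatically when the frame or toolbar changes, and its Handle is the HWND you would feed to vkCreateWin32SurfaceKHR through whatever Vulkan binding you use.

```csharp
using System;
using System.Windows.Forms;

// Hypothetical main frame: a toolbar at the top plus a child window used purely
// as the Vulkan render target. The child window never includes the toolbar area.
public class MainFrame : Form
{
    private readonly ToolStrip _toolbar = new ToolStrip();
    private readonly Panel _renderPanel = new Panel();

    public MainFrame()
    {
        _toolbar.Dock = DockStyle.Top;
        _renderPanel.Dock = DockStyle.Fill;

        Controls.Add(_renderPanel);
        Controls.Add(_toolbar);
        _renderPanel.BringToFront();   // make sure the Fill panel respects the docked toolbar

        // The HWND handed to VkWin32SurfaceCreateInfoKHR / vkCreateWin32SurfaceKHR
        // (through whatever binding you use) is the child window's handle:
        IntPtr vulkanHwnd = _renderPanel.Handle;

        // When the frame resizes, the panel resizes with it; recreate the swapchain
        // at _renderPanel.ClientSize rather than at the frame's client size.
        _renderPanel.Resize += (s, e) => { /* recreate swapchain here */ };
    }
}
```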

How to track the location of a window belonging to another app

When screen sharing a specific window on macOS with Zoom or Skype/Teams, they draw a red or green highlight border around that window (which belongs to a different application) to indicate it is being shared. The border is following the target window in real time, with resizing, z-order changes etc.
See example:
What macOS APIs and techniques might be used to achieve this effect?
You can find the location of windows using CGWindowListCopyWindowInfo and related APIs, which are available to sandboxed apps.
This is a very fast and efficient API, fast enough to be polled. The SonOfGrab sample code is a great platform for trying this stuff out.
You can also install a global event tap using +[NSEvent addGlobalMonitorForEventsMatchingMask:handler:] (available in sandbox) to track mouse down, drag and mouse up events and then you can respond immediately whenever the user starts or releases a drag. This way your response will be snappy.
(Drawing a border would be done by creating your own transparent window, slightly larger than, and at the same window layer as, the window you are tracking. And then simply draw a pretty green box into it. I'm not exactly sure about setting the z-order. The details of this part would be best as a separate question.)

Change The Size of the SubWindow and the Area It Covers In Unity

I'm creating a simple 2D racing game in Unity. The game has another subwindow that displays the enemy's view, kind of like a small screen on top that lets you know where your enemy is or what they're doing.
Currently, I'm using a secondary camera to follow the enemy and a render texture to limit the display of the subwindow as well as its size.
However, I want the size of the window to be flexible, e.g. so the ratio of the window can be 4:3 instead of a perfect square. With my current implementation, whenever I rescale the subwindow, it just scales everything up, including the view being displayed. What I want to happen is that when I rescale the subwindow, the displayed area simply gets wider: it should cover more area because I made the window wider. I want the view to be independent of the window size.
Is there a way to do this with my current implementation? If not, how can I achieve what I want?
I'm new to Unity so I really hope someone could teach me. Thank you so much.
The area viewed is independent of the size of your window. If you want to change how much is displayed, instead of just refitting the same content into another size, you'll have to work with the field of view: http://docs.unity3d.com/ScriptReference/Camera-fieldOfView.html
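If it helps, here is a rough sketch of one way to act on that, assuming the subwindow is a RawImage displaying a RenderTexture from the secondary camera (the component and field names are invented). It keeps the camera's vertical coverage fixed (Camera.fieldOfView for a perspective camera, orthographicSize for an orthographic one) and matches the aspect to the new window shape, so widening the window reveals more of the scene instead of stretching it.

```csharp
using UnityEngine;
using UnityEngine.UI;

// Hypothetical helper: call Resize() whenever the subwindow's on-screen size changes.
public class MinimapWindow : MonoBehaviour
{
    public Camera minimapCamera;   // the secondary camera following the enemy
    public RawImage targetImage;   // UI element that displays the render texture

    public void Resize(int widthPixels, int heightPixels)
    {
        // Throw away the old texture and make one that matches the new window size.
        var old = minimapCamera.targetTexture;
        if (old != null)
        {
            minimapCamera.targetTexture = null;
            old.Release();
        }

        var rt = new RenderTexture(widthPixels, heightPixels, 16);
        minimapCamera.targetTexture = rt;
        targetImage.texture = rt;

        // Match the camera's aspect to the new window shape instead of stretching.
        // Vertical coverage stays fixed, so a wider window shows more of the world.
        minimapCamera.aspect = (float)widthPixels / heightPixels;
    }
}
```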

General considerations for NUI/touch interface

For the past few months I've been looking into developing a Kinect based multitouch interface for a variety of software music synthesizers.
The overall strategy I've come up with is to create objects, either programmatically or (if possible) algorithmically, to represent the various controls of the soft synth. These should have:
X position
Y position
Height
Width
MIDI output channel
MIDI data scaler (convert x/y coords to MIDI values)
Two strategies I've considered for algorithmic creation are an XML description and somehow pulling stuff right off the screen (i.e. given a running program, find the x/y coords of all its controls). I have no idea how to go about that second one, which is why I express it in such specific technical language ;). I could do some intermediate solution, like using mouse clicks on the corners of controls to generate an XML file. Another thing I could do, which I've seen frequently in Flash apps, is to put the screen size into a variable and use math to build all the interface objects in terms of the screen size. Note that it isn't strictly necessary to make the objects the same size as the onscreen controls, or to represent all onscreen objects (some are just indicators, not interactive controls).
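For what it's worth, here is a minimal sketch of what one such interface object could look like, bundling the properties listed above with a simple bounds test (for the focus idea discussed below) and the x/y-to-MIDI scaling; the SynthControl class and its members are invented names, and actually sending the MIDI message is left to whatever library you use.

```csharp
using System;

// Hypothetical description of one on-screen control of the soft synth.
public class SynthControl
{
    public float X, Y, Width, Height;   // screen-space bounds
    public int MidiChannel;             // where the control's output goes
    public int ControllerNumber;        // e.g. a CC number

    // Focus test: is this hand position inside the control's bounds?
    public bool Contains(float handX, float handY) =>
        handX >= X && handX <= X + Width &&
        handY >= Y && handY <= Y + Height;

    // MIDI data scaler: map the hand's vertical position within the control
    // onto the 0..127 range expected by a MIDI continuous controller.
    public int ToMidiValue(float handY)
    {
        float t = (handY - Y) / Height;          // 0 at top edge, 1 at bottom edge
        t = Math.Max(0f, Math.Min(1f, t));       // clamp in case the hand drifts out
        return (int)Math.Round((1f - t) * 127);  // top of the control = max value
    }
}
```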
Other considerations:
Given (for now) two sets of x/y coords as input (left and right hands), what is my best option for using them? My first instinct is/was to create some kind of focus test, where if the x/y coords fall within an interface object's bounds that object becomes active, and then becomes inactive if they fall outside some other, smaller bounds for some period of time. The cheap solution I found was to use the left hand as the pointer/selector and the right as a controller, but it seems like I can do more. I have a few gesture solutions (hidden Markov models) I could screw around with. Not that they'd be easy to get working, exactly, but it's something I could see myself doing given sufficient incentive.
So, to summarize, the problem is
represent the interface (necessary because the default interface always expects mouse input)
select a control
manipulate it using two sets of x/y coords (rotary/continuous controller) or, in the case of switches, preferably use a gesture to switch it without giving/taking focus.
Any comments, especially from people who have worked/are working in multitouch io/NUI, are greatly appreciated. Links to existing projects and/or some good reading material (books, sites, etc) would be a big help.
Whoa, lots of stuff here. I worked on lots of NUI stuff during my time at Microsoft, so let's see what we can do...
But first, I need to get this pet peeve out of the way: you say "Kinect based multitouch". That's just wrong. Kinect inherently has nothing to do with touch (which is why you have the "select a control" challenge). The types of UI consideration needed for touch, body tracking, and mouse are totally different. For example, in touch UI you have to be very careful about resizing things based on screen size/resolution/DPI... regardless of the screen, fingers are always the same physical size and people have the same degree of physical accuracy, so you want your buttons and similar controls to always be roughly the same physical size. Research has found 3/4 of an inch to be the sweet spot for touchscreen buttons. This isn't so much of a concern with Kinect though, since you aren't directly touching anything - accuracy is dictated not by finger size but by sensor accuracy and the user's ability to precisely control finicky and laggy virtual cursors.
If you spend time playing with Kinect games, it quickly becomes clear that there are 4 interaction paradigms.
1) Pose-based commands. The user strikes and holds a pose to invoke some application-wide action or command (usually bringing up a menu).
2) Hover buttons. The user moves a virtual cursor over a button and holds still for a certain period of time to select the button (a rough sketch of this dwell logic follows this list).
3) Swipe-based navigation and selection. The user waves their hand in one direction to scroll a list and in another direction to select from the list.
4) Voice commands. The user just speaks a command.
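Since the hover-button paradigm boils down to a dwell timer, here is a rough sketch of that logic, assuming you feed it the virtual cursor position each frame; the class name and thresholds are made up and would need tuning.

```csharp
using System;

// Hypothetical dwell detector for hover-to-select: the cursor has to stay within
// a small radius of where it settled for a fixed amount of time.
public class DwellSelector
{
    private readonly float _radius;     // how much jitter to tolerate
    private readonly float _dwellTime;  // how long to hold still, in seconds
    private float _anchorX, _anchorY, _anchorTime;
    private bool _armed;

    public DwellSelector(float radius = 20f, float dwellTime = 1.5f)
    {
        _radius = radius;
        _dwellTime = dwellTime;
    }

    // Call every frame with the virtual cursor position; true means "selected".
    public bool Update(float x, float y, float time)
    {
        float dx = x - _anchorX, dy = y - _anchorY;
        if (!_armed || dx * dx + dy * dy > _radius * _radius)
        {
            // Cursor moved too far: restart the timer from the new position.
            _anchorX = x; _anchorY = y; _anchorTime = time;
            _armed = true;
            return false;
        }
        if (time - _anchorTime >= _dwellTime)
        {
            _armed = false;   // restart the timer so selection doesn't fire every frame
            return true;
        }
        return false;
    }
}
```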
There are other mouse-like ideas that have been tried by hobbyists (I haven't seen these in an actual game) but frankly they suck: 1) using one hand for the cursor and the other hand to "click" where the cursor is, or 2) using the z-coordinate of the hand to determine whether to "click".
It's not clear to me whether you are asking about how to make some existing mouse widgets work with Kinect. If so, there are some projects on the web that will show you how to control the mouse with Kinect input but that's lame. It may sound super cool but you're really not at all taking advantage of what the device does best.
If I were building a music synthesizer, I would focus on approach #3 - swiping. Something like Dance Central. On the left side of the screen, show a list of your MIDI controllers with some small visual indication of their status. Let the user swipe their left hand to scroll through and select a controller from this list. On the right side of the screen, show how you are tracking the user's right hand within some plane in front of their body. Now you're letting them use both hands at the same time, giving immediate visual feedback of how each hand is being interpreted, and not requiring them to be super precise.
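If it's useful, here is a rough sketch of the kind of one-axis swipe detection that approach implies, assuming you feed it the scrolling hand's x coordinate and a timestamp each frame; the class name and thresholds are invented and would need tuning against real skeleton data.

```csharp
using System;

// Hypothetical one-axis swipe detector for the scrolling hand.
public class SwipeDetector
{
    private readonly float _minDistance;   // how far the hand must travel
    private readonly float _maxDuration;   // how quickly it must travel, in seconds
    private float _startX;
    private float _startTime;
    private bool _tracking;

    public SwipeDetector(float minDistance = 0.2f, float maxDuration = 0.5f)
    {
        _minDistance = minDistance;
        _maxDuration = maxDuration;
    }

    // Call once per frame. Returns -1 for a left swipe, +1 for a right swipe, 0 otherwise.
    public int Update(float handX, float time)
    {
        if (!_tracking)
        {
            _startX = handX;
            _startTime = time;
            _tracking = true;
            return 0;
        }

        if (time - _startTime > _maxDuration)
        {
            // Too slow: restart the measurement from the current position.
            _startX = handX;
            _startTime = time;
            return 0;
        }

        float dx = handX - _startX;
        if (Math.Abs(dx) >= _minDistance)
        {
            _tracking = false;   // require a fresh start before the next swipe
            return dx > 0 ? 1 : -1;
        }
        return 0;
    }
}
```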
p.s. I'd also like to give a shout-out to Josh Blake's upcoming NUI book. It's good stuff. If you really want to master this area, go order a copy :) http://www.manning.com/blake/

Objective-C draw a path and detect when it closes (forms a closed shape)

I'm fairly new to game programming (but not to programming) and I want to create a spaceship which leaves a trail on the screen. My problem is coming up with a solution for how to detect whether the trail left by the ship forms a closed shape - e.g. if the ship leaves a trail around an object, the object is caught inside its trail, so to speak.
The direction I'm thinking in is to draw the path of the trail on an image not visible on the screen and, every now and then, try to fill it with a certain color and then check whether the fill is contained within the trail path. However, that seems like a lot of overhead.
Any ideas how to do this? I'm using cocos2d, if that's of any help.
In game programming you often need to think more mathematically than visually.
First, does your ship continuously leave a trail on the screen? If yes, then it will be easier to know when the shape closes: you just have to remember the coordinate where your ship started to leave a trail, then wait for the trail to approach this coordinate again (for example within a radius of 10 pixels, or else the user would need to be really accurate and hit exactly the same pixel to close the shape).
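A quick sketch of that closure check (written in C# for brevity rather than Objective-C; the class name and thresholds are placeholders):

```csharp
using System;
using System.Collections.Generic;

// Hypothetical trail tracker: add a point every time the ship moves far enough,
// and report the loop as closed when it comes back near its starting point.
public class Trail
{
    private readonly List<(float X, float Y)> _points = new List<(float X, float Y)>();
    private const float CloseRadius = 10f;    // "within a radius of 10 pixels"
    private const int MinPointsForLoop = 20;  // don't close immediately after starting

    public IReadOnlyList<(float X, float Y)> Points => _points;

    // Returns true when adding this point closes the loop.
    public bool Add(float x, float y)
    {
        _points.Add((x, y));
        if (_points.Count < MinPointsForLoop) return false;

        float dx = x - _points[0].X;
        float dy = y - _points[0].Y;
        return Math.Sqrt(dx * dx + dy * dy) <= CloseRadius;
    }
}
```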
The visual representation of the trail is only there for the user; you'll never use it to compute anything. What you will do is keep in memory the path followed by the ship's trail: a polygon, which is nothing more than the list of coordinates it has followed.
Then, after you know that your shape is closed, you have to determine whether an object is inside your polygon or not. It's possible that Objective-C or cocos2d (I don't know much about it) already contains a built-in function to check whether a point is inside a polygon. In Java there is the Polygon class, which makes this really easy. If you don't find anything, you can do it yourself; there are already great answers about this subject on SO, here is a nice one: How can I determine whether a 2D Point is within a Polygon?
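For reference, here is the classic ray-casting version of that point-in-polygon test, again sketched in C# rather than Objective-C, with made-up type names; the linked answer covers the same idea in more detail.

```csharp
using System.Collections.Generic;

public struct Pt { public float X, Y; public Pt(float x, float y) { X = x; Y = y; } }

public static class TrailGeometry
{
    // Classic ray-casting test: count how many polygon edges a horizontal ray
    // from the point crosses; an odd count means the point is inside.
    public static bool Contains(IReadOnlyList<Pt> polygon, Pt p)
    {
        bool inside = false;
        for (int i = 0, j = polygon.Count - 1; i < polygon.Count; j = i++)
        {
            Pt a = polygon[i], b = polygon[j];
            bool crosses = (a.Y > p.Y) != (b.Y > p.Y) &&
                           p.X < (b.X - a.X) * (p.Y - a.Y) / (b.Y - a.Y) + a.X;
            if (crosses) inside = !inside;
        }
        return inside;
    }
}
```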