Strange Tab control default colours in 2 apps on same monitor - vb.net

I have two small apps, both of which use a tab control. All the display properties on the tab controls appear to be identical, but when the apps are shown on the same monitor one displays in a brighter colour, making the controls (labels, text boxes, etc.) look messy. Both apps look fine on the development PC, but on the user's PC there is a noticeable difference. Does anyone know why and, more importantly, have a solution?
The apps are written in VB.Net.
Thanks in advance for your help
Paul

We have a CRM application at my company that uses a Firebrick background colour, and it never looks quite the same from one monitor to another. Bear in mind that every monitor has a different display panel and different gamma settings.
Are both PCs running the same graphics card model (NVIDIA, ATI)? The output you see is not determined by the monitor alone; the graphics card is involved as well.
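Since the two apps differ on the same monitor, it may also be worth ruling out a software cause before blaming the hardware: a WinForms TabPage whose UseVisualStyleBackColor property is True is painted in the visual-style shade, which is noticeably brighter than SystemColors.Control, so one app may simply have that flag set differently from the other. A minimal sketch that forces the same explicit colour in both apps (the colour chosen here is only an illustration):

    Private Sub ForceTabColours(tabs As TabControl)
        For Each page As TabPage In tabs.TabPages
            page.UseVisualStyleBackColor = False    ' don't let visual styles pick the shade
            page.BackColor = SystemColors.Control   ' same explicit colour in both apps
        Next
    End Sub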

Related

Scale adjustment on Windows Forms in VB.Net

OK, so I know this seems to have been asked a million times, but here goes.
I have written a Windows Forms application in VB.Net. My screen is set to 100% scaling and everything works fine, until a user runs the application on a PC where the scale is set to 125% or 150%; the form then becomes too big for the screen and they cannot access certain features of the form. The same applies if they have two monitors set to different scales: drag the form onto the higher-percentage screen and it becomes too big. This clearly has something to do with AutoScaleMode and DPI awareness, but having tried every combination I can think of, the issue remains. Any ideas?
I have tried changing AutoScaleMode to Dpi, Inherit, Font and None, and I have tried setting DPI Aware to False in the Windows settings.
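For reference, no single AutoScaleMode value fixes this on its own; the combination usually suggested is to make the process DPI-aware before any window is created and then let the forms scale themselves. Below is a minimal sketch only, assuming a .NET Framework WinForms project with the VB application framework disabled so a custom Sub Main can run; Form1 and the Startup module are placeholder names:

    Imports System.Runtime.InteropServices
    Imports System.Windows.Forms

    Module Startup
        ' P/Invoke into user32.dll: SetProcessDPIAware marks the process as
        ' DPI-aware so Windows stops bitmap-stretching the forms at 125%/150%.
        <DllImport("user32.dll")>
        Private Function SetProcessDPIAware() As Boolean
        End Function

        <STAThread>
        Sub Main()
            SetProcessDPIAware()                    ' must run before the first form is created
            Application.EnableVisualStyles()
            Application.SetCompatibleTextRenderingDefault(False)
            Application.Run(New Form1())            ' Form1 is a placeholder for the main form
        End Sub
    End Module

With the process DPI-aware, leave the form's AutoScaleMode at Font (or Dpi) and keep the designer-generated AutoScaleDimensions so the controls are rescaled rather than the whole window being stretched. The two-monitors-at-different-scales case needs per-monitor DPI awareness, which on Windows 8.1 is a separate opt-in (the dpiAwareness manifest entry or SetProcessDpiAwareness), and WinForms support for that is limited.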

Desktop Window Manager (DWM) overlay

I'd like to make an overlay with the following properties:
should work at least on Windows 8.1
should be on top of everything, like a mouse cursor
should incorporate the pixels which are already on the background, like a blur filter
no flickering
Details on each of these points:
1) I assume that the DWM (Desktop Window Manager) is active and that DirectX 11.2 is used. Of course it would be nice to have it working on other Windows versions too, but that is not a priority.
2) The problem is that when simply using WS_EX_TOPMOST, menus from applications still appear over my overlay. In my case this really hurts, because I want to display something with the same properties as a cursor. Imagine a cursor that is suddenly hidden whenever you open a menu -> unacceptable.
3) I'd like to read the pixels from the Windows desktop, including any effect Windows applies (like blur), and use this information for my filter. If I add my overlay, as described in 2), I should still be able to get a fresh, unobstructed copy of the background in the next frame rather than reading back my own overlay.
4) If I just write something into the Windows desktop directly, it gets overwritten immediately on the next frame by Windows itself. This is not acceptable.
One example of such an application is a magnifying glass, which has exactly the properties I need, but for that case Windows 8.1 has an API. What I want instead is to write a program which displays a hand on the desktop (controlled by a Leap Motion) that influences the Windows desktop, so that you almost "feel" how you move your hand over the desktop.
If I write a tiny DirectX and/or OpenGL application for myself this is all very easy:
render all the regular stuff to a texture
use this texture for a post processing filter and add all my stuff on top of it
render just a quad to the back buffer
But I like to do that for the whole Windows desktop.
I found many different applications, but they are of no use to me:
applications which claim to be on top are still behind menus; normally this doesn't really hurt, but it is unacceptable for anything cursor-like
screen-capturing programs which hook themselves into all running programs are nice, but I want to hook into the DWM itself
normally screen-capturing programs do not draw anything into the back buffer, so every frame they get a fresh, unobstructed back buffer
My question boils down to: how can I write my own magnifying glass for Windows 8.1?
I fear that my only serious option is to hook myself into the DWM, which is what I am trying to avoid.
I'm happy to hear any ideas on how to achieve this, or pointers to applications which do what I describe.

Emulate massive 2x4K screen size for interactive touchscreen project

I'm currently working on a project for an interactive visitor space, and one of the interactive installations will be two landscape 4K screens side by side running an extended/stretched desktop.
So the total screen resolution will be 7680 x 2160 px. The application will be browser-based and will let multiple users work with a media library, bring pictures and videos onto the screen, and stretch, pan and play them and all that good stuff.
My problem is testing this solution as we go through development. The actual screens are in Laguna Beach, CA, and we are based in London. We can work with 1080p touchscreens, but there is nothing like testing on the real thing, and I foresee difficulties.
Does anyone have any ideas how I could emulate this screen size, which would at least give us a little more confidence? I'm thinking of virtual machines, but I'm not sure whether that will even work.
Any help appreciated.
Many thanks
Pete
Try following this. http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1003
Good luck!

General considerations for NUI/touch interface

For the past few months I've been looking into developing a Kinect based multitouch interface for a variety of software music synthesizers.
The overall strategy I've come up with is to create objects, either programmatically or (if possible) algorithmically, to represent the various controls of the soft synth. These should have:
X position
Y position
Height
Width
MIDI output channel
MIDI data scaler (converts x/y coords to MIDI values)
Two strategies I've considered for algorithmic creation are an XML description and somehow pulling the layout right off the screen (i.e. given a running program, find the x/y coords of all its controls). I have no idea how to go about that second one, which is why I express it in such specific technical language ;). I could do some intermediate solution, like using mouse clicks on the corners of controls to generate an XML file. Another thing I could do, which I've seen frequently in Flash apps, is to put the screen size into a variable and use maths to build all interface objects in terms of the screen size. Note that it isn't strictly necessary to make the objects the same size as the on-screen controls, or to represent all on-screen objects (some are just indicators, not interactive controls).
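A minimal sketch of what one of those objects might look like, assuming screen-relative coordinates in the 0..1 range (the class and member names are illustrative, not from any existing library):

    Public Class SynthControlRegion
        Public Property X As Double             ' position and size in screen-relative units (0..1)
        Public Property Y As Double
        Public Property Width As Double
        Public Property Height As Double
        Public Property MidiChannel As Integer  ' MIDI output channel

        ' True if a hand position falls inside this control's bounds.
        Public Function Contains(px As Double, py As Double) As Boolean
            Return px >= X AndAlso px <= X + Width AndAlso
                   py >= Y AndAlso py <= Y + Height
        End Function

        ' Simple MIDI data scaler: map the vertical position within the
        ' region to a 0..127 controller value (top of the region = 127).
        Public Function ScaleToMidi(py As Double) As Integer
            Dim relative As Double = Math.Max(0.0, Math.Min(1.0, (py - Y) / Height))
            Return CInt(Math.Round((1.0 - relative) * 127))
        End Function
    End Class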
Other considerations:
Given (for now) two sets of x/y coords as input (left and right hands), what is my best option for using them? My first instinct is/was to create some kind of focus test: if the x/y coords fall within an interface object's bounds, that object becomes active, and it only becomes inactive once they have fallen outside some other, slightly larger bounds for some period of time. The cheap solution I found was to use the left hand as the pointer/selector and the right as a controller, but it seems like I could do more. I have a few gesture solutions (hidden Markov models) I could play around with. Not that they'd be easy to get working, exactly, but it's something I could see myself doing given sufficient incentive.
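As a sketch of that focus test, with the deactivation bounds made slightly larger than the activation bounds so focus doesn't flicker at the edge (the class name, the 25% inflation and the 400 ms grace period are all illustrative assumptions):

    Imports System.Drawing

    Public Class FocusTracker
        Private ReadOnly _grace As TimeSpan = TimeSpan.FromMilliseconds(400)
        Private _outsideSince As Date? = Nothing
        Public Property Active As Boolean

        ' Call once per frame with the current hand position and the control's bounds.
        Public Sub Update(hand As PointF, bounds As RectangleF)
            Dim release As RectangleF = RectangleF.Inflate(bounds, bounds.Width * 0.25F,
                                                           bounds.Height * 0.25F)
            If bounds.Contains(hand) Then
                Active = True                       ' inside the tight bounds: take focus
                _outsideSince = Nothing
            ElseIf Active AndAlso Not release.Contains(hand) Then
                If Not _outsideSince.HasValue Then _outsideSince = Date.UtcNow
                If Date.UtcNow - _outsideSince.Value > _grace Then Active = False
            Else
                _outsideSince = Nothing             ' still within the release zone: keep focus
            End If
        End Sub
    End Class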
So, to summarize, the problem is
represent the interface (necessary because the default interface always expects mouse input)
select a control
manipulate it using two sets of x/y coords (rotary/continuous controller) or, in the case of switches, preferably use a gesture to switch it without giving/taking focus.
Any comments, especially from people who have worked/are working in multitouch io/NUI, are greatly appreciated. Links to existing projects and/or some good reading material (books, sites, etc) would be a big help.
Whoa, lots of stuff here. I worked on a lot of NUI stuff during my time at Microsoft, so let's see what we can do...
But first, I need to get this pet peeve out of the way: you say "Kinect based multitouch". That's just wrong. Kinect inherently has nothing to do with touch (which is why you have the "select a control" challenge). The types of UI consideration needed for touch, body tracking, and mouse are totally different. For example, in touch UI you have to be very careful about resizing things based on screen size/resolution/DPI... regardless of the screen, fingers are always the same physical size and people have the same degree of physical accuracy, so you want your buttons and similar controls to always be roughly the same physical size. Research has found 3/4 of an inch to be the sweet spot for touchscreen buttons. This isn't so much of a concern with Kinect though, since you aren't directly touching anything; accuracy is dictated not by finger size but by sensor accuracy and the user's ability to precisely control finicky and laggy virtual cursors.
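As an aside, that 3/4-inch rule only stays 3/4 of an inch if you convert it using the screen's actual DPI rather than hard-coding a pixel size; a tiny sketch of that conversion (a plain WinForms helper, nothing Kinect-specific):

    ' Convert the ~3/4 inch touch-target rule of thumb into pixels for the
    ' current screen, using the DPI reported by a WinForms Graphics object.
    Private Function TouchTargetPixels(g As Graphics) As Integer
        Return CInt(Math.Round(0.75 * g.DpiX))      ' e.g. 72 px at 96 DPI, 108 px at 144 DPI
    End Function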
If you spend time playing with Kinect games, it quickly becomes clear that there are 4 interaction paradigms.
1) Pose-based commands. The user strikes and holds a pose to invoke some application-wide command (usually bringing up a menu).
2) Hover buttons. The user moves a virtual cursor over a button and holds still for a certain period of time to select the button.
3) Swipe-based navigation and selection. The user waves a hand in one direction to scroll a list and in another direction to select from the list.
4) Voice commands. The user just speaks a command.
There are other mouse-like ideas that have been tried by hobbyists (I haven't seen these in an actual game) but frankly they suck: 1) using one hand for the cursor and the other hand to "click" where the cursor is, or 2) using the z-coordinate of the hand to determine whether to "click".
It's not clear to me whether you are asking about how to make some existing mouse widgets work with Kinect. If so, there are some projects on the web that will show you how to control the mouse with Kinect input but that's lame. It may sound super cool but you're really not at all taking advantage of what the device does best.
If I were building a music synthesizer, I would focus on approach #3: swiping, something like Dance Central. On the left side of the screen, show a list of your MIDI controllers with some small visual indication of their status. Let the user swipe their left hand to scroll through and select a controller from this list. On the right side of the screen, show how you are tracking the user's right hand within some plane in front of their body. Now you're letting them use both hands at the same time, giving immediate visual feedback of how each hand is being interpreted, and not requiring them to be super precise.
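A rough sketch of what the left-hand swipe detection could look like, independent of any particular Kinect SDK (the thresholds and names are illustrative assumptions; hand positions are assumed to be in metres, as skeleton joint coordinates usually are):

    Public Class SwipeDetector
        Private _startX As Single
        Private _startTime As Date
        Private _tracking As Boolean

        ' Call once per skeleton frame with the left hand's x position.
        ' Returns -1 for a left swipe, +1 for a right swipe, 0 otherwise.
        Public Function Update(handX As Single) As Integer
            If Not _tracking Then
                _startX = handX
                _startTime = Date.UtcNow
                _tracking = True
                Return 0
            End If

            Dim dx As Single = handX - _startX
            Dim elapsed As TimeSpan = Date.UtcNow - _startTime

            If Math.Abs(dx) > 0.3F Then              ' ~30 cm of quick horizontal travel counts as a swipe
                _tracking = False
                Return Math.Sign(dx)
            ElseIf elapsed > TimeSpan.FromMilliseconds(500) Then
                _startX = handX                      ' too slow: slide the window forward
                _startTime = Date.UtcNow
            End If
            Return 0
        End Function
    End Class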
P.S. I'd also like to give a shout out to Josh Blake's upcoming NUI book. It's good stuff. If you really want to master this area, go order a copy :) http://www.manning.com/blake/

Screen resolution issue

I have written an application. It works fine at some resolutions, but most of the controls overlap when it is run on some other computers.
Is there any way to have the application adjust itself automatically when it is run on different computers?
Thanks
Furqan
You might want to take a look at this: http://msdn.microsoft.com/en-us/library/ms229649.aspx
(assuming this is not WPF)
You can do a lot with docking and anchoring to help your form maintain its usability at different sizes, but there are practical limits. You can enforce a minimum size and ensure that your controls resize appropriately down to that limit.
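A minimal sketch of that approach inside a form's code (the control names are placeholders, not anything from the original question):

    Private Sub ConfigureLayout()
        ' Never let the form shrink below the size the layout was designed for.
        Me.MinimumSize = New Size(800, 600)

        ' A grid that grows and shrinks with the form.
        dataGrid.Anchor = AnchorStyles.Top Or AnchorStyles.Bottom Or
                          AnchorStyles.Left Or AnchorStyles.Right

        ' A button pinned to the bottom-right corner.
        btnOk.Anchor = AnchorStyles.Bottom Or AnchorStyles.Right

        ' A status strip that always spans the bottom edge.
        statusStrip.Dock = DockStyle.Bottom
    End Sub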