I have a steering wheel (Thrustmaster Ferrari 458 Spider) whose only intended platform is the Xbox One, but I am attempting to write drivers for it on PC (Windows 10). It has some limited functionality already, except that the steering wheel axis is mapped to the left trigger axis. I want to create a driver that remaps the axes; I guess the first step is to read the device's input stream and register an XInput device for it.
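For that first step, here is roughly what I have in mind: a minimal sketch using the hidapi library, assuming the wheel enumerates as a plain HID device (0x044F is the vendor ID usually listed for Thrustmaster, but I still need to confirm the exact IDs in Device Manager):

    // Minimal sketch: dump the wheel's raw HID input reports so the axis layout
    // can be worked out before any remapping is attempted.
    // Assumes the hidapi library; 0x044F is the USB vendor ID usually reported
    // for Thrustmaster devices - verify it (and pick the right device) yourself.
    #include <hidapi/hidapi.h>
    #include <cstdio>

    int main() {
        if (hid_init() != 0) return 1;

        const unsigned short kThrustmasterVid = 0x044F;   // verify on your machine
        hid_device *wheel = nullptr;

        // List every HID interface from that vendor and open the first one.
        hid_device_info *devs = hid_enumerate(kThrustmasterVid, 0);
        for (hid_device_info *d = devs; d; d = d->next) {
            printf("0x%04hx:0x%04hx  %ls\n", d->vendor_id, d->product_id,
                   d->product_string ? d->product_string : L"(unnamed)");
            if (!wheel) wheel = hid_open_path(d->path);
        }
        hid_free_enumeration(devs);
        if (!wheel) { hid_exit(); return 1; }

        // Dump a few hundred input reports; turn the wheel and watch which bytes move.
        unsigned char report[64];
        for (int i = 0; i < 300; ++i) {
            int n = hid_read(wheel, report, sizeof(report));   // blocking read
            for (int b = 0; b < n; ++b) printf("%02x ", report[b]);
            printf("\n");
        }

        hid_close(wheel);
        hid_exit();
        return 0;
    }

Once the report layout is clear, one possible route for the remapping is to feed a virtual Xbox 360 controller (for example via the ViGEm bus driver) from user mode instead of writing a kernel-mode driver.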
Related
I have a mouse from Speedlink that is able to do a lot of things, like changing the colours of the LEDs, but I can only do those things with the software provided by Speedlink.
Is it possible to code your own software that controls the LED lights of the mouse?
Yes, but you would have to have the hardware specifications to know what needs to be sent to the mouse for it to accept the commands you're looking for. Usually these things are not published or readily accessible.
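If you do manage to get that information (for example by capturing the USB traffic of the official Speedlink tool), sending a command usually amounts to writing an output or feature report to the mouse's vendor-specific HID interface. A purely illustrative sketch using the hidapi library; every ID and payload byte below is a placeholder, not a real Speedlink command:

    // Illustrative only: sending a hypothetical "set LED colour" report.
    // The vendor/product IDs, report ID and payload bytes are placeholders;
    // the real values would have to come from the hardware specification or
    // from sniffing the traffic of the official tool.
    #include <hidapi/hidapi.h>

    int main() {
        hid_init();
        hid_device *mouse = hid_open(0x1234, 0x5678, nullptr);  // placeholder IDs
        if (!mouse) { hid_exit(); return 1; }

        unsigned char report[] = { 0x00,               // report ID (placeholder)
                                   0xFF, 0x00, 0x80 }; // hypothetical R, G, B values
        hid_write(mouse, report, sizeof(report));      // send as an output report

        hid_close(mouse);
        hid_exit();
        return 0;
    }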
I bought two Microsoft Basic Optical mice, identical to the one that performed the functions I needed really well. With the first one, a right click of the mouse would grab and flip the grid, with the object staying in one position within the grid. The Blender 2.79 3D modelling app is what I am using the mice for. I plugged the two new mice into two computers to try them out; they would NOT do the grid grabbing and flipping, though they would perform the other functions. The grid-grabbing function is important so that you can move the scene and inspect the model, as if you were walking around a solid object in the real world.
We ported our application from Qt3 to Qt5. It runs smoothly under Windows but not under Linux (X11). With Qt3 there was no problem under either Windows or Linux.
Inside the application there is a canvas of about 1000x800 pixels. A simple vector graphic is drawn onto the canvas. The user clicks into the canvas, holds the mouse button pressed and moves the mouse. Each mouse move results in a repaint.
We recorded timestamps (in milliseconds) at each stage:
Start of MouseMove-event handling: 10581
call of update or repaint (makes no difference which one)
Handling of resulting Paint-Event: 10583
Painting finishes: 10584
return from update/repaint: 10687 (!)
I cannot find any reason for this lag of roughly 100 ms (on every single mouse move event!)
I need help!
In Qt 4.8 the native graphics backend was deprecated. Remote X11 is no longer drawn with X11 calls but by painting onto a canvas and transmitting the result (a bitmap) to the X server that drives the display. This can result in larger bandwidth requirements and slower performance when running X11 over a network.
See also this
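A common mitigation in this situation is to stop issuing a repaint for every single mouse-move event and cap the repaint rate instead; a minimal sketch, assuming the drawing happens in a QWidget subclass (class and member names are illustrative):

    // Throttle repaints during a drag so at most ~60 paint events per second
    // (and therefore ~60 bitmap transfers) reach the X server.
    #include <QWidget>
    #include <QMouseEvent>
    #include <QPainter>
    #include <QElapsedTimer>
    #include <QPoint>

    class Canvas : public QWidget {
    public:
        explicit Canvas(QWidget *parent = nullptr) : QWidget(parent) { m_timer.start(); }

    protected:
        void mouseMoveEvent(QMouseEvent *event) override {
            m_lastPos = event->pos();          // always remember the latest position
            if (m_timer.elapsed() >= 16) {     // ~60 Hz cap; adjust as needed
                m_timer.restart();
                update();                      // schedule a coalesced repaint
            }
        }

        void paintEvent(QPaintEvent *) override {
            QPainter p(this);
            // ... draw the vector graphic here ...
            p.drawEllipse(m_lastPos, 4, 4);    // e.g. mark the last known mouse position
        }

    private:
        QElapsedTimer m_timer;
        QPoint m_lastPos;
    };

Because the latest mouse position is stored unconditionally, no drag information is lost; only the number of round trips to the X server is reduced.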
I'd like to make an overlay with the following properties:
should work at least on Windows 8.1
should be on top of everything, like a mouse cursor
should incorporate the pixels which are already on the background, like a blur filter
no flickering
Details on each of these points:
1) I assume that DWM is active and DirectX 11.2 is used. Sure, it would be nice to have it working on other Windows versions, but that has no priority.
2) The problem is that when simply using WS_EX_TOPMOST, menus from applications appear over my overlay. In my case this really hurts, as I'd like to display something with the same properties as a cursor. Imagine a cursor that is suddenly hidden when you open a menu -> unacceptable.
3) I'd like to read the pixels from the Windows desktop, including any effect Windows applies (like blur), and use this information for my filter. If I add my overlay, as described in 2), I should still be able to get a fresh, unobstructed copy of the background in the next frame and not read back my own overlay (see the sketch after this list).
4) If I just write something into the Windows desktop directly, it gets overwritten immediately on the next frame by Windows itself. This is not acceptable.
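For point 3), the DXGI Desktop Duplication API (available since Windows 8) looks like the intended way to read back the composed desktop; below is a minimal, untested sketch of grabbing one frame, just to illustrate what I mean by a "fresh copy of the background". Note that this alone does not solve the problem of excluding my own overlay from the capture.

    // Minimal sketch: grab one frame of the composed desktop via the
    // DXGI Desktop Duplication API (Windows 8+). Error handling is reduced to hr checks.
    #include <d3d11.h>
    #include <dxgi1_2.h>
    #pragma comment(lib, "d3d11.lib")

    int main() {
        ID3D11Device *device = nullptr;
        ID3D11DeviceContext *context = nullptr;
        if (FAILED(D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                                     nullptr, 0, D3D11_SDK_VERSION,
                                     &device, nullptr, &context)))
            return 1;

        // Walk from the device to the output (monitor) to duplicate.
        IDXGIDevice *dxgiDevice = nullptr;
        device->QueryInterface(__uuidof(IDXGIDevice), (void **)&dxgiDevice);
        IDXGIAdapter *adapter = nullptr;
        dxgiDevice->GetAdapter(&adapter);
        IDXGIOutput *output = nullptr;
        adapter->EnumOutputs(0, &output);           // primary monitor
        IDXGIOutput1 *output1 = nullptr;
        output->QueryInterface(__uuidof(IDXGIOutput1), (void **)&output1);

        IDXGIOutputDuplication *duplication = nullptr;
        if (FAILED(output1->DuplicateOutput(device, &duplication)))
            return 1;

        // Acquire one composed desktop frame; the returned resource is a
        // D3D11 texture that can feed a post-processing shader directly.
        DXGI_OUTDUPL_FRAME_INFO frameInfo = {};
        IDXGIResource *desktopResource = nullptr;
        if (SUCCEEDED(duplication->AcquireNextFrame(500, &frameInfo, &desktopResource))) {
            ID3D11Texture2D *desktopTexture = nullptr;
            desktopResource->QueryInterface(__uuidof(ID3D11Texture2D), (void **)&desktopTexture);
            // ... run the blur / hand compositing pass on desktopTexture here ...
            desktopTexture->Release();
            desktopResource->Release();
            duplication->ReleaseFrame();
        }

        duplication->Release();
        // (the remaining COM interfaces would be released here as well)
        return 0;
    }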
One example of such an application is a magnifying glass, which has exactly the properties I need, but for that case Windows 8.1 provides an API. In contrast, I'd like to write a program which displays a hand on the desktop (controlled by a Leap Motion) and lets it influence the Windows desktop, so you can almost "feel" how you move your hand over the desktop.
If I write a tiny DirectX and/or OpenGL application for myself this is all very easy:
render all the regular stuff to a texture
use this texture for a post processing filter and add all my stuff on top of it
render just a quad to the back buffer
But I'd like to do that for the whole Windows desktop.
I found many different applications, but they are of no use to me:
applications which claim to be on top are still behind menus; this normally doesn't really hurt, but it is unacceptable for a cursor-like thing
screen-capturing programs which hook themselves into all running programs are nice, but I would rather hook into DWM
normally screen-capturing programs do not draw anything into the back buffer, so they get a new, unobstructed back buffer every frame
My question boils down to: how can I write my own magnifying glass for Windows 8.1?
I fear that my only serious option is to hook myself into DWM, which is what I am trying to avoid.
I'm happy to hear any idea of how to achieve this, or pointers to applications which do what I describe.
I am using the Microsoft Kinect SDK and I would like to know whether it is possible to get the depth frame, the color frame, and the skeleton data for all frames at 30 fps. Using Kinect Explorer I can see that the color and depth frames arrive at nearly 30 fps, but as soon as I choose to view the skeleton, the rate drops to around 15-20 fps.
Yes, it is possible to capture color/depth at 30fps while capturing the skeleton.
See image below, just in case you think me dodgy. :) This is a raw Kinect Explorer running, straight from Visual Studio 2010. My work development platform is an i5 Dell laptop.
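A minimal sketch of such a setup with the native Kinect for Windows SDK 1.x API, assuming a single sensor (this is illustrative, not the Kinect Explorer source): it enables the color, depth and skeleton streams together and services all three from one loop.

    // Open color, depth and skeleton streams on one sensor and pump all three.
    #include <Windows.h>
    #include <NuiApi.h>
    #pragma comment(lib, "Kinect10.lib")

    int main() {
        if (FAILED(NuiInitialize(NUI_INITIALIZE_FLAG_USES_COLOR |
                                 NUI_INITIALIZE_FLAG_USES_DEPTH |
                                 NUI_INITIALIZE_FLAG_USES_SKELETON)))
            return 1;

        HANDLE colorEvent = CreateEvent(nullptr, TRUE, FALSE, nullptr);
        HANDLE depthEvent = CreateEvent(nullptr, TRUE, FALSE, nullptr);
        HANDLE skelEvent  = CreateEvent(nullptr, TRUE, FALSE, nullptr);
        HANDLE colorStream = nullptr, depthStream = nullptr;

        NuiImageStreamOpen(NUI_IMAGE_TYPE_COLOR, NUI_IMAGE_RESOLUTION_640x480,
                           0, 2, colorEvent, &colorStream);
        NuiImageStreamOpen(NUI_IMAGE_TYPE_DEPTH, NUI_IMAGE_RESOLUTION_640x480,
                           0, 2, depthEvent, &depthStream);
        NuiSkeletonTrackingEnable(skelEvent, 0);

        HANDLE events[] = { colorEvent, depthEvent, skelEvent };
        for (int i = 0; i < 3000; ++i) {                      // run for a while, then exit
            DWORD which = WaitForMultipleObjects(3, events, FALSE, 100);
            if (which == WAIT_OBJECT_0) {                     // color frame ready
                const NUI_IMAGE_FRAME *frame = nullptr;
                if (SUCCEEDED(NuiImageStreamGetNextFrame(colorStream, 0, &frame)))
                    NuiImageStreamReleaseFrame(colorStream, frame);
            } else if (which == WAIT_OBJECT_0 + 1) {          // depth frame ready
                const NUI_IMAGE_FRAME *frame = nullptr;
                if (SUCCEEDED(NuiImageStreamGetNextFrame(depthStream, 0, &frame)))
                    NuiImageStreamReleaseFrame(depthStream, frame);
            } else if (which == WAIT_OBJECT_0 + 2) {          // skeleton frame ready
                NUI_SKELETON_FRAME skeleton = {};
                if (SUCCEEDED(NuiSkeletonGetNextFrame(0, &skeleton)))
                    NuiTransformSmooth(&skeleton, nullptr);   // default smoothing
            }
        }

        NuiShutdown();
        return 0;
    }

The sketch only acquires and releases the frames; any per-frame processing or rendering would go where the comments are.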
I'm doing some stereoscopic work, which means I need to work with two instances of various filters (e.g. a camera source filter that receives an IP stream), and this is proving not to be trivial.
I even tried copying IPCamfilter.ax to a renamed clone and manually creating new CLSID entries in the registry; the clone shows up, but won't work. Any ideas?
Should I edit the clone filter's binary to change its CLSID and then register it? Or is there a simple way to do this with GraphEdit?
Do you work with two cameras, or with one camera from which you want two pictures?
In the first case, there are some filters which only work with one connected device (in the case of FireWire, for example, the cameras have to be connected to two different controllers).
In the latter case, you can use the Infinite Pin Tee Filter to get two streams from the one device. You can test that in GraphEdit as well.
There's nothing in COM that prevents you from creating two instances of the same CLSID, so you're solving the wrong problem by trying to change the CLSID. There must be something in the filter internals that prevents multiple use in the same process.
If you can't get access to the source to fix it, you could have two capture graphs in separate processes and then use a bridge of some sort to combine the two outputs in a third graph (or in your application).
G
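To illustrate the first point: instantiating the same filter CLSID twice in one graph is plain COM. A minimal sketch, where CLSID_IPCamSource is a placeholder for the camera filter's actual CLSID:

    // Two instances of the same filter CLSID in one DirectShow graph.
    // CLSID_IPCamSource is a placeholder; read the real CLSID from the filter's
    // registration (e.g. with GraphEdit or OleView).
    #include <dshow.h>
    #pragma comment(lib, "strmiids.lib")
    #pragma comment(lib, "ole32.lib")

    // Placeholder GUID - replace with the camera filter's real CLSID.
    static const GUID CLSID_IPCamSource =
        { 0x00000000, 0x0000, 0x0000, { 0, 0, 0, 0, 0, 0, 0, 0 } };

    int main() {
        CoInitialize(nullptr);

        IGraphBuilder *graph = nullptr;
        CoCreateInstance(CLSID_FilterGraph, nullptr, CLSCTX_INPROC_SERVER,
                         IID_IGraphBuilder, (void **)&graph);

        IBaseFilter *left = nullptr, *right = nullptr;
        HRESULT hrL = CoCreateInstance(CLSID_IPCamSource, nullptr, CLSCTX_INPROC_SERVER,
                                       IID_IBaseFilter, (void **)&left);
        HRESULT hrR = CoCreateInstance(CLSID_IPCamSource, nullptr, CLSCTX_INPROC_SERVER,
                                       IID_IBaseFilter, (void **)&right);

        // COM happily hands out both instances; if the second one refuses to
        // be added, connected or run, the limitation is inside the filter.
        if (SUCCEEDED(hrL)) graph->AddFilter(left,  L"IP cam (left eye)");
        if (SUCCEEDED(hrR)) graph->AddFilter(right, L"IP cam (right eye)");

        // ... configure each source and connect the rest of the graph ...

        if (left)  left->Release();
        if (right) right->Release();
        if (graph) graph->Release();
        CoUninitialize();
        return 0;
    }

If the second instance refuses to connect or run, the restriction really is inside the filter, which is where the separate-process capture graphs and a bridge, as suggested above, come in.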
SplitCam is a freeware virtual video clone and video driver for connecting several applications to a single video capture source. Usually, if you have a camera connected to your PC, you cannot use it in more than one application at the same time, and there are no standard Windows options that make this possible. SplitCam allows you to easily multiply your video source in any conferencing software like ICQ, Yahoo, MSN Messenger, or whatever.
Video Processing Filter is a transform filter that can rotate the video by 90, 180, or 270 degrees (keeping the aspect ratio when rotating by 90 or 270 degrees), flip the video, convert an RGB video stream to grayscale, and invert colors, in any DirectShow-based application.