I have a USB gamepad that I'm attempting to write a user-space driver for. Originally, I wrote the driver to work with Linux (using libusb), and everything seems to work fine there.
Basically, I get all the useful data (e.g. X, Y coordinates and button presses) from the device and store it in an array that is printf'd onto the screen. Now I can get this far with OS X easily.
However, I don't know how to broadcast that data so it can be used by other applications. I realize that one workaround is to call CGEventCreateKeyboardEvent, where I could basically map the device's buttons to keystrokes. However, I would rather not use emulated keystrokes because some information (for example the X, Y coordinates) is difficult to represent as keystroke events.
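For reference, the keystroke-mapping workaround I mentioned would look roughly like this (just a sketch; the key code is an arbitrary placeholder):

    // Sketch of the CGEvent workaround described above (not what I actually want).
    // Posts a synthetic key-down/key-up pair for an arbitrary virtual key code.
    #include <ApplicationServices/ApplicationServices.h>

    void postFakeKeystroke(CGKeyCode keyCode)
    {
        CGEventRef down = CGEventCreateKeyboardEvent(NULL, keyCode, true);
        CGEventRef up   = CGEventCreateKeyboardEvent(NULL, keyCode, false);
        CGEventPost(kCGHIDEventTap, down);   // goes to whatever app is frontmost
        CGEventPost(kCGHIDEventTap, up);
        CFRelease(down);
        CFRelease(up);
    }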
I'm unfamiliar with IOKit, so is there a function available there (or somewhere else buried in Apple documentation) that will help me do this? And, if so, will I have to rewrite my current driver to use the IOKit/other-Apple-library approach? Or is there some super-easy-to-use function where I can take my data and send it to any foreground application?
just a quick question here...
So, as you know, when you create a Vulkan device, you need to make sure the physical device you chose supports presenting to a surface with vkGetPhysicalDeviceSurfaceSupportKHR(), right? That means you need to create the surface before creating the device.
Now let's say that at run time the user presses a button which opens a new window, and stuff is going to be drawn to that window, so you need a new surface, right? But the device has already been created...
Does this mean I have to create all the surfaces before I create the device, or do I have to recreate the device? And if I need to recreate it, what happens to all the stuff that has been created/allocated from that device?
Does this mean I have to create all the surfaces before I create the device, or do I have to recreate the device?
Neither.
If the physical device cannot draw to the surface... then you need to find a physical device which can. This could happen if you have 2 GPUs, each plugged into a different monitor. Each GPU can only draw to surfaces that are on its monitor (though sometimes there are ways for implementations to get around this).
So if the physical device for the logical VkDevice you're using cannot draw to the surface, you don't "recreate" the device. You create a new device, one which is almost certainly unable to draw to the surface that the old device could draw to. So in this case, you'd need 2 separate devices to render to the two surfaces.
But for most multi-monitor cases this isn't an issue. If you have a single GPU with multi-monitor output support, then any windows you create will almost certainly be compatible with that GPU. Integrated GPU + discrete GPU cases also tend to support the same surfaces.
The Vulkan API simply requires that you check whether there is an incompatibility and then deal with it however you can, which could involve moving the window to the proper monitor or other OS-specific things.
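As a rough sketch of the check involved when a new window (and thus a new VkSurfaceKHR) shows up at run time (the variable names are placeholders for whatever you already have):

    #include <vulkan/vulkan.h>

    // Ask whether the physical device behind your existing VkDevice can present
    // to the freshly created surface.
    bool canPresentTo(VkPhysicalDevice physicalDevice,
                      uint32_t presentQueueFamilyIndex,
                      VkSurfaceKHR newSurface)
    {
        VkBool32 supported = VK_FALSE;
        vkGetPhysicalDeviceSurfaceSupportKHR(physicalDevice,
                                             presentQueueFamilyIndex,
                                             newSurface,
                                             &supported);
        return supported == VK_TRUE;
    }

If this returns true (the common single-GPU case), you keep the existing VkDevice and just create a new swapchain for the new surface. If it returns false, you would need a second VkDevice created from a physical device that can present to that surface.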
I'm trying to tap the currently selected output audio device on macOS, so I basically have a pass through listener that can monitor the audio stream currently being output without affecting it.
I want to copy this data to a ring buffer in real time so I can operate on it separately.
The combination of Apple docs and (outdated?) SO answers is confusing as to whether I need to write a hacky kernel extension, can utilise CoreAudio for this, or need to interface with the HAL.
I would like to work in Swift if possible.
Many thanks
(ps. I had been looking at this and this)
I don't know about kernel extensions - their use of special "call us" signing certificates or the necessity of turning off SIP discourages casual exploration.
However, you can use a combination of CoreAudio and HAL AudioServer plugins to do what you want, and you don't even need to write the plugin yourself; there are several open-source versions to choose from.
CoreAudio doesn't give you a way to record from (or "tap") output devices; you can only record from input devices. The way to get around this is to create a virtual "pass-through" device (an AudioServerPlugin), not associated with any hardware, that copies output through to input. You then set this pass-through device as the default output and record from its input. I've done this using open-source AudioServer plugins like BackgroundMusic and BlackHole.
To tap/record from the resulting device, you can simply add an AudioDeviceIOProc callback to it, or set the device as the kAudioOutputUnitProperty_CurrentDevice of a kAudioUnitSubType_HALOutput AudioUnit.
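For example, the IOProc route looks roughly like this (a sketch: passThroughDevice is the AudioObjectID of the virtual device you found earlier, and the ring-buffer write is a hypothetical helper you'd supply):

    #include <CoreAudio/CoreAudio.h>

    // Real-time callback: no locks, no allocation; just stow the input buffers.
    static OSStatus tapIOProc(AudioObjectID inDevice,
                              const AudioTimeStamp* inNow,
                              const AudioBufferList* inInputData,
                              const AudioTimeStamp* inInputTime,
                              AudioBufferList* outOutputData,
                              const AudioTimeStamp* inOutputTime,
                              void* inClientData)
    {
        for (UInt32 i = 0; i < inInputData->mNumberBuffers; ++i) {
            const AudioBuffer& buf = inInputData->mBuffers[i];
            (void)buf;
            // myRingBufferWrite(buf.mData, buf.mDataByteSize);  // hypothetical helper
        }
        return noErr;
    }

    void startTap(AudioObjectID passThroughDevice)
    {
        AudioDeviceIOProcID procID = NULL;
        AudioDeviceCreateIOProcID(passThroughDevice, tapIOProc, NULL, &procID);
        AudioDeviceStart(passThroughDevice, procID);
    }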
There are two problems with the above virtual pass-through device approach:
1. you can't hear your output anymore, because it's being consumed by the pass-through device
2. changing the default output device will switch away from your device, and the tap will fall silent.
If 1. is a problem, then a simple fix is to create a Multi-Output Device containing the pass-through device and a real output device (see screenshot) and set this as the default output device. Volume controls stop working, but you can still change the real output device's volume in Audio MIDI Setup.app.
For 2. you can add a listener to the default output device and update the multi-output device above when it changes.
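Roughly like this (only the listener plumbing is shown; the callback body that rebuilds the multi-output device is up to you):

    #include <CoreAudio/CoreAudio.h>

    // Called whenever the system default output device changes.
    static OSStatus defaultOutputChanged(AudioObjectID inObjectID,
                                         UInt32 inNumberAddresses,
                                         const AudioObjectPropertyAddress* inAddresses,
                                         void* inClientData)
    {
        // ...look up the new default device and update the multi-output device...
        return noErr;
    }

    void listenForDefaultOutputChanges()
    {
        AudioObjectPropertyAddress addr = {
            kAudioHardwarePropertyDefaultOutputDevice,
            kAudioObjectPropertyScopeGlobal,
            kAudioObjectPropertyElementMaster
        };
        AudioObjectAddPropertyListener(kAudioObjectSystemObject, &addr,
                                       defaultOutputChanged, NULL);
    }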
You can do most of the above in Swift, although for stowing samples into the ring buffer from the buffer-delivery callbacks you'll have to use C or some other language that can respect the realtime audio rules (no locks, no memory allocation, etc.). You could maybe try AVAudioEngine to do the tap, but IIRC changing the input device is a vale of tears.
Suppose I have a 3D (but not stereoscopic) DirectX game or program. Is there a way for a second program (or a driver) to change the camera position in the game?
I'm trying to build a head-tracking plugin or driver that I can use for my DirectX games/programs. An inertial motion sensor will give me the position of my head but my problem is using that position data to change the camera position, not with the hardware/math concerns of head tracking.
I haven't been able to find anything on how to do this so far, but iZ3D was able to create two cameras near the original camera and use them for stereoscopic rendering, so I know there exists some hook/link/connection into DirectX that makes camera manipulation by a second program possible.
If I am able to get this to work I'll release the code.
-Shane
Hooking Direct3D calls is, by its nature, just hooking DLL calls; i.e. it's not something special to D3D but a generic technique. Try googling for "hook dll" or start from here: [C++] Direct3D hooking sample. As always happens with hooks, there are many caveats, and you'll have to write a fairly large amount of boilerplate to satisfy all the needs of the hooked application.
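As a rough illustration of what such a hook ends up doing (this assumes a hooking library like Detours or MinHook has already redirected IDirect3DDevice9::SetTransform to the function below; the yaw value and the matrix ordering are placeholders for your head-tracking math):

    #include <d3d9.h>
    #include <cmath>

    typedef HRESULT (STDMETHODCALLTYPE *SetTransform_t)(IDirect3DDevice9*,
                                                        D3DTRANSFORMSTATETYPE,
                                                        const D3DMATRIX*);
    static SetTransform_t g_origSetTransform = nullptr; // filled in by the hooking library
    static float g_headYaw = 0.0f;                      // updated from the inertial sensor

    static D3DMATRIX Multiply(const D3DMATRIX& a, const D3DMATRIX& b)
    {
        D3DMATRIX r = {};
        for (int i = 0; i < 4; ++i)
            for (int j = 0; j < 4; ++j)
                for (int k = 0; k < 4; ++k)
                    r.m[i][j] += a.m[i][k] * b.m[k][j];
        return r;
    }

    HRESULT STDMETHODCALLTYPE HookedSetTransform(IDirect3DDevice9* device,
                                                 D3DTRANSFORMSTATETYPE state,
                                                 const D3DMATRIX* matrix)
    {
        if (state == D3DTS_VIEW && matrix) {
            // Build a simple yaw rotation from the head-tracking angle...
            D3DMATRIX yaw = {};
            yaw.m[0][0] = cosf(g_headYaw);  yaw.m[0][2] = -sinf(g_headYaw);
            yaw.m[1][1] = 1.0f;
            yaw.m[2][0] = sinf(g_headYaw);  yaw.m[2][2] =  cosf(g_headYaw);
            yaw.m[3][3] = 1.0f;
            // ...and apply it on top of the view matrix the game sets.
            D3DMATRIX adjusted = Multiply(*matrix, yaw);
            return g_origSetTransform(device, state, &adjusted);
        }
        return g_origSetTransform(device, state, matrix);
    }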
That said, manipulating the camera in games usually gives poor results. There are at least two key features of modern PC games which will severely limit your idea:
Pre-clipping. Almost any game engine culls objects that fall outside the viewing frustum. So when you rotate the camera to the side, you won't see the objects you'd expect to see in the real world; they were simply never sent to D3D, since the game doesn't know that the viewing plane has changed.
Multi-pass rendering. Many popular post-processing effects are done in extra passes (either over the whole scene or just part of it). Mirrors and in-game "screens" are the best-known examples. Without knowing which camera you're manipulating, you'll most likely just break the scene.
By the way, #2 is the reason why stereoscopic mode is not 100% compatible with all games. For example, in the Source engine HDR scenes are rendered in three passes, and if you don't know how to distinguish them you'll do nothing but break the game. Take a look at how NVIDIA implements its stereoscopic mode: they write a separate hook for every popular game, and even with this approach it's not always possible to get the expected result.
I've always thought this would be cool, and now the OS technology seems like it could really make it easy to implement.
Is there a known/easy way to hook up dual mice as inputs to a multi-touch enabled OS, such as Win7, and use one in each hand to simulate two hands (or fingers?) on the screen? This would make it easy to stretch, rotate, etc and simulate a lot of the gestures used on touchscreens.
I think it might be a lot of fun for certain kinds of games, and many artistic apps as well.
In Windows, you can use the DirectInput API included in DirectX 8+ to read independent input from as many mice as desired. The easiest way is to get hold of several USB mice and connect them all at once.
Also, you don't need a 3D view at all to take advantage of DirectInput; you can access the API from a regular Win32 or .NET app.
For instance, the PC game Ricochet Infinity allows two mice as input for its two player co-op mode.
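A minimal sketch of enumerating the attached pointer devices with DirectInput 8 (error handling omitted; whether each physical mouse shows up as a separate device can vary by system):

    #define DIRECTINPUT_VERSION 0x0800
    #include <windows.h>
    #include <dinput.h>
    #include <cstdio>

    // Print each pointer-class device DirectInput can see; creating and polling a
    // device per GUID would follow from here.
    static BOOL CALLBACK enumMice(LPCDIDEVICEINSTANCEA instance, LPVOID /*context*/)
    {
        printf("Found pointer device: %s\n", instance->tszProductName);
        return DIENUM_CONTINUE;
    }

    void listMice(HINSTANCE hInstance)
    {
        IDirectInput8A* di = nullptr;
        DirectInput8Create(hInstance, DIRECTINPUT_VERSION, IID_IDirectInput8A,
                           reinterpret_cast<void**>(&di), nullptr);
        di->EnumDevices(DI8DEVCLASS_POINTER, enumMice, nullptr, DIEDFL_ATTACHEDONLY);
        di->Release();
    }

(Link against dinput8.lib and dxguid.lib.)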
A platform-independent solution for multiple mouse input is http://www.icculus.org/manymouse/
Like the Windows-only solution posted, this still won't let you do multi-touch in Chrome, the Android emulator (though both should be relatively simple to implement), etc.
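For what it's worth, a polling loop with ManyMouse looks roughly like this (names taken from the library's manymouse.h as I remember them; check against the version you download):

    #include <cstdio>
    #include "manymouse.h"

    int main()
    {
        const int numMice = ManyMouse_Init();   // returns the number of mice found
        if (numMice < 0) { printf("ManyMouse init failed\n"); return 1; }

        for (int i = 0; i < numMice; ++i)
            printf("Mouse #%d: %s\n", i, ManyMouse_DeviceName(i));

        ManyMouseEvent event;
        bool running = true;
        while (running) {
            while (ManyMouse_PollEvent(&event)) {
                if (event.type == MANYMOUSE_EVENT_RELMOTION)
                    printf("mouse %u moved %d on axis %u\n",
                           event.device, event.value, event.item);
                else if (event.type == MANYMOUSE_EVENT_BUTTON)
                    printf("mouse %u button %u -> %d\n",
                           event.device, event.item, event.value);
                else if (event.type == MANYMOUSE_EVENT_DISCONNECT)
                    running = false;   // a mouse went away; bail out in this sketch
            }
            // ...feed the per-device deltas into your own two-cursor logic here...
        }

        ManyMouse_Quit();
        return 0;
    }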
If you want to simulate multi-touch in other people's software, things are a little trickier.
Just about any non-Windows system can support multiple pointers for the main interface: http://en.wikipedia.org/wiki/Multi-Pointer_X
Chrome supports simulating touch events; I'm unsure about multi-touch.
It is possible to simulate touch pointers from mouse input with Multi Touch Vista.
For the Windows version of my application, I'm using WM_INPUT and registering the mouse device directly to get the most precise movements. How would I go about doing this in X11?
There are several ways to do this, depending on the framework you are using.
If you intend to do this with "Xlib", the most basic way to program for X11, you should take a look at the Xlib manual, with special attention to XInput.
More detailed information can be retrieved by using XI2 (XInput version 2).
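For example, selecting raw motion events with XI2 looks roughly like this (a minimal sketch that skips XI2 version negotiation and error handling; build with something like g++ rawmouse.cpp -lX11 -lXi):

    #include <cstdio>
    #include <X11/Xlib.h>
    #include <X11/extensions/XInput2.h>

    int main()
    {
        Display* dpy = XOpenDisplay(NULL);
        if (!dpy) return 1;

        int xiOpcode, evBase, errBase;
        if (!XQueryExtension(dpy, "XInputExtension", &xiOpcode, &evBase, &errBase))
            return 1;   // XInput extension not available

        // Ask for raw motion from all master devices on the root window.
        unsigned char mask[XIMaskLen(XI_LASTEVENT)] = {0};
        XISetMask(mask, XI_RawMotion);
        XIEventMask evmask;
        evmask.deviceid = XIAllMasterDevices;
        evmask.mask_len = sizeof(mask);
        evmask.mask = mask;
        XISelectEvents(dpy, DefaultRootWindow(dpy), &evmask, 1);
        XFlush(dpy);

        for (;;) {
            XEvent ev;
            XNextEvent(dpy, &ev);
            if (ev.xcookie.type == GenericEvent &&
                ev.xcookie.extension == xiOpcode &&
                XGetEventData(dpy, &ev.xcookie)) {
                if (ev.xcookie.evtype == XI_RawMotion) {
                    XIRawEvent* raw = (XIRawEvent*)ev.xcookie.data;
                    // raw_values holds one entry per bit set in valuators.mask
                    double dx = 0, dy = 0;
                    const double* val = raw->raw_values;
                    if (XIMaskIsSet(raw->valuators.mask, 0)) dx = *val++;
                    if (XIMaskIsSet(raw->valuators.mask, 1)) dy = *val++;
                    printf("device %d: dx=%f dy=%f\n", raw->deviceid, dx, dy);
                }
                XFreeEventData(dpy, &ev.xcookie);
            }
        }
    }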
Other, higher-level frameworks can make things easier. Take a look at Qt; it's one of the best GUI APIs I've ever seen.
Tell us if you need more.
Here are eleven links that provide reference, advice, and two different approaches to get raw mouse movements on Linux. Link #2 (Python) shows the traditional way to read the mouse device and #1 (C), #3 (Python), and #4 (C) show the events way. The ps/2 protocol is covered in link #5. Link #6 is a command line example.
Each of the two approaches bypasses X (Xlib) by reading the stream of x and y deltas from a device driver file. Rather than deltas, Xlib traditionally provides just the window (or screen) position, so you must keep warping the pointer away from the edge of the window if you want to keep getting readings in a particular direction. [See link #8 for a quick discussion of the "warping to center" hack used by many programs with X.] The new XInput 2.0 extension (link #10), however, offers an X API for getting the deltas (you may have to link at least -lX11 and -lXi).
The essence of either of the two device approaches is this:
1: Open for reading a device such as "/dev/input/mouse0" (the exact name depends on your distro and on which method you use; something like /dev/input/event4 for the events approach).
2: Read from it. Every packet of a few bytes represents a mouse movement. For the raw PS/2 protocol method (i.e. opening mouse[n] as above), you basically want just the 2nd and 3rd bytes of every 3-byte packet if you only care about X, Y movement: the second byte represents X and the third represents Y. Each of the two bytes is a single int8_t (signed 8-bit) quantity. Positive X is to the right, as you'd expect; however, positive Y is upward (as in most math textbooks), whereas on a screen positive Y is usually taken to go downward from the upper-left corner (rather than up from the lower-left origin of a typical Cartesian coordinate system).
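A minimal sketch of that read loop (assuming the raw PS/2-style device node and permission to read it; the device name varies by distro):

    #include <cstdio>
    #include <cstdint>
    #include <fcntl.h>
    #include <unistd.h>

    int main()
    {
        const char* device = "/dev/input/mouse0";   // or /dev/input/mice, etc.
        int fd = open(device, O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        unsigned char packet[3];
        while (read(fd, packet, sizeof(packet)) == (ssize_t)sizeof(packet)) {
            // Byte 0: button bits and sign/overflow flags; bytes 1-2: signed deltas.
            int8_t dx = (int8_t)packet[1];
            int8_t dy = (int8_t)packet[2];          // positive is "up" in this protocol
            int left = packet[0] & 0x1;             // left button state
            printf("dx=%4d dy=%4d left=%d\n", dx, dy, left);
        }

        close(fd);
        return 0;
    }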
Other notes: You may want to use open/read rather than fopen/getc if you are running X (see link #7 and the code in the other links). If you want the button state for the first 3 buttons, or want to confirm the sign of x or y, look at the first byte as well. (I doubt overflow is an issue in ordinary "stream" use, but those bits are there too.) If you experiment on the Linux command line, e.g. with a quick "cat" piped to "less", you may notice that most movements are just 1 unit (lots of ^A, ^@, and <FF>); this may depend on how the mouse is configured and on how capable the mouse is. Link #5 covers the simple protocol and even covers the Microsoft IntelliMouse extension for buttons 4 and 5 and the mouse wheel. Link #11 covers the controller chip/logic configuration for a PS/2 mouse or keyboard and explains how the mouse is configured through this interface. There is a tool (link #9, xdotool) that simulates mouse clicks to help automate windowing tasks. Finally, you may need to be root to read directly from these devices (it depends on your distro).