What is the nature of the gestures needed in Windows 8?

Most laptop touchpads don't handle multitouch and hence cannot send swipe gestures to the OS.
Would it be possible to send gestures to Windows from an external device, such as a Teensy or a recent Arduino, which can already emulate a keyboard and a mouse? I could send buttons 4 and 5 (mouse wheel up and down), but I would like to send a real swipe gesture (for example with a flex sensor...).

One way to work with an Arduino or similar board is the Microsoft .NET Micro Framework, which is open source and available at no cost from: Micro Framework
There are other frameworks available for the Arduino that you might want to use. Whatever you use to read your sensor hardware, though, the output you send to Windows must meet certain specifications.
To connect hardware that reads gestures, you will need to understand how drivers are created, so take a look at this: Info on drivers.
The above link covers sensor drivers, which is not quite what you are looking for (you want "gestures"), but first you have to be able to make the connection to your device, and that guide might help; I have reviewed it for other purposes.
There is a lot to dig through, but the first step, in my opinion, is understanding how to get your software to communicate with Windows 8. Let me know if you have any other questions. I am not the best person for this; you may want to ask the community at the Micro Framework link shown above.
Good luck.

That's perfectly possible. What you're effectively suggesting is creating your own input peripheral, like a trackpad, and using it to send input. As long as Windows recognizes the device as an input source, it will work.
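For the wheel-based fallback the question mentions, a rough Arduino-style sketch could look like the following. It assumes a Leonardo/Micro-class board or a Teensy with native USB HID support and a flex sensor on analog pin A0; the pin and the threshold are placeholders to tune. Note that this produces scroll events, not true swipe gestures, which would require implementing a HID touch-digitizer report descriptor:

```cpp
// Sketch for a Leonardo/Micro-class board or Teensy (native USB HID assumed).
// A flex sensor on A0 drives mouse-wheel events; values are placeholders.
#include <Mouse.h>

const int kFlexPin = A0;
int restingLevel = 0;

void setup()
{
    Mouse.begin();
    restingLevel = analogRead(kFlexPin);   // calibrate the sensor at rest
}

void loop()
{
    int bend = analogRead(kFlexPin) - restingLevel;
    // Translate a strong bend into scroll steps; the threshold of 100
    // is an arbitrary placeholder you would tune for your sensor.
    if (abs(bend) > 100)
    {
        Mouse.move(0, 0, bend > 0 ? 1 : -1);   // third argument is the wheel
        delay(50);
    }
}
```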

Applying Non-Standard Power Assertions & Creating Virtual HIDs

I've got a big ask here, but I am hoping someone might be able to help me. If there's another site you think this should be posted on, please let me know.
I'm the developer of the free app Amphetamine for macOS, and I'm hoping to add a new feature to the app: keeping a Mac awake in closed-display (clamshell) mode without a keyboard/mouse/power adapter/display connected to the Mac. I get requests for this feature almost daily.
I've been working on a solution (and it's mostly ready) that uses a non-App Store helper app which must be downloaded and installed separately. I could still go with that solution, but I want to explore one more option before pushing the separate-app solution out to the world.
An Amphetamine user tipped me off that another app, AntiSleep, can keep a Mac awake in closed-display mode without meeting Apple's requirements. I've tested this claim, and it's true. After doing a bit of digging into how AntiSleep might be accomplishing this, I've come up with 2 possible theories so far (though there may be more to it):
In addition to the standard power assertion types, it looks like AntiSleep is using (a) private framework(s) to apply non-standard power assertions. The following non-standard power assertion types are active when AntiSleep is keeping a Mac awake: DenySystemSleep, UserIsActive, RequiresDisplayAudio, & InternalPreventDisplaySleep. I haven't been able to find much information on these power assertion types beyond what appears in IOPMLibPrivate.h. I'm not familiar at all with using private frameworks, but I assume I could theoretically add the IOPMLibPrivate header file to a project and then create these power assertion types. I understand that would likely result in an App Store review rejection for Amphetamine, of course. What about non-App Store apps? Would Apple notarize an app using this? Beyond that, could someone help me confirm that the only way to apply these non-standard power assertions is to use a private framework?
I suspect that AntiSleep may also be creating a virtual keyboard and mouse. Certainly, the idea of creating a virtual keyboard and mouse to get around Apple's requirement of having a keyboard and mouse connected to the Mac when using closed-display mode is an intriguing idea. After doing some searching, I found foohid. However, I ran into all kinds of errors trying to add and use the foohid files in a test project. Would someone be willing to take a look at the foohid project and help me understand whether it is theoretically possible to include this functionality in an App Store compatible app? I'm not asking for code help with that (yet). I'd just like some help determining whether it might be possible to do.
Thank you in advance for taking a look.
Would Apple notarize an app using this?
I haven't seen any issues with notarising code that uses private APIs. Currently, Apple only seems to use notarisation for scanning for inclusion of known malware.
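For what it's worth, the public assertion call takes the assertion type as a plain CFString, so if those private types really are just string constants (as their appearance in IOPMLibPrivate.h suggests), applying one would not even need the private header. A rough sketch; the string value of DenySystemSleep is an assumption on my part:

```cpp
// Sketch: applying a power assertion by name with the public IOPMLib API.
// Build: clang++ main.cpp -framework IOKit -framework CoreFoundation
// "DenySystemSleep" is a PRIVATE assertion type from IOPMLibPrivate.h; its
// exact string value here is an assumption, not a documented constant.
#include <IOKit/pwr_mgt/IOPMLib.h>
#include <CoreFoundation/CoreFoundation.h>
#include <unistd.h>

int main()
{
    IOPMAssertionID assertionID = kIOPMNullAssertionID;
    IOReturn result = IOPMAssertionCreateWithName(
        CFSTR("DenySystemSleep"),                 // private type (assumed)
        kIOPMAssertionLevelOn,
        CFSTR("Closed-display sleep test"),       // human-readable reason
        &assertionID);

    if (result == kIOReturnSuccess)
    {
        sleep(60);                                // hold the assertion briefly
        IOPMAssertionRelease(assertionID);
    }
    return 0;
}
```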
Would someone be willing to take a look at the foohid project and help me understand whether it is theoretically possible to include this functionality in an App Store compatible app?
Taking a quick glance at the code of that project, it's clear it implements a kernel extension (kext). Those are not allowed on the App Store.
However, since macOS 10.15 Catalina, there's a new way to write HID drivers, using DriverKit. The idea is that the APIs are very similar to the kernel APIs, although I suspect it'll be a rewrite of the kext as a DriverKit driver, rather than a simple port.
DriverKit drivers are permitted to be included in App Store apps.
I don't know if a DriverKit based HID driver will solve your specific power management issue.
If you go with a DriverKit solution, this will only work on 10.15+.
I suspect that AntiSleep may also be creating a virtual keyboard and mouse.
I haven't looked at AntiSleep, but I do know that in addition to writing an outright HID driver, it's possible to generate HID events using user space APIs such as IOHIDPostEvent(). I don't know if those are allowed on the App Store, but as far as I'm aware, IOKitLib is generally fine.
It's possible you might be able to implement your virtual input device using those.
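As a sketch of the general idea, here is the Quartz CGEvent route rather than IOHIDPostEvent() itself, and with no claim that synthetic events are enough to satisfy the closed-display-mode check:

```cpp
// Sketch: posting a synthetic key event from user space with Quartz.
// Build: clang++ main.cpp -framework ApplicationServices
#include <ApplicationServices/ApplicationServices.h>

int main()
{
    // Key code 0 is 'a' on an ANSI layout; chosen arbitrarily for the demo.
    CGEventRef keyDown = CGEventCreateKeyboardEvent(NULL, (CGKeyCode)0, true);
    CGEventRef keyUp   = CGEventCreateKeyboardEvent(NULL, (CGKeyCode)0, false);

    CGEventPost(kCGHIDEventTap, keyDown);   // inject at the HID event tap
    CGEventPost(kCGHIDEventTap, keyUp);

    CFRelease(keyDown);
    CFRelease(keyUp);
    return 0;
}
```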

Is there a comprehensive list of Device information available through WinRT?

There are things I want to know about the device. Is it ARM or Intel? Does it support Bluetooth? What version of Windows is the user running? What is the resolution of the device? What is the IP of the device? Things like that. And I know not everything is available. Instead of asking a question for every single piece of information, is there a comprehensive list (or even a demo) that shows what is available?
You can use DeviceInformation.FindAllAsync() to list cameras, mics, audio output devices, and external storage. For the accelerometer, I think you need to catch exceptions when you try to use one. I doubt there is an API to check for ARM/Intel or the Windows version, although you can compile separate builds for ARM and Intel and use #defines to tell the difference. For screen resolution I would use something like Window.Current.Bounds (assuming you are in full-screen mode). IP and Bluetooth are probably things you would check in their own stacks (I've never needed those, so I'm not sure where). I haven't seen a demo that shows all of these, but it sounds like something that might be worth adding to the toolkit...
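To illustrate the DeviceInformation.FindAllAsync() part, here is a minimal console sketch in modern C++/WinRT (a Windows 8-era app would have used C++/CX, but the WinRT API surface is the same; the set of device classes polled is just an example):

```cpp
// Sketch: enumerate several WinRT device classes and print their names.
#include <winrt/Windows.Foundation.h>
#include <winrt/Windows.Foundation.Collections.h>
#include <winrt/Windows.Devices.Enumeration.h>
#include <iostream>
#pragma comment(lib, "windowsapp")

using namespace winrt;
using namespace Windows::Foundation;
using namespace Windows::Devices::Enumeration;

IAsyncAction ListDevicesAsync()
{
    // Each device class mentioned in the answer above.
    for (auto devClass : { DeviceClass::VideoCapture, DeviceClass::AudioCapture,
                           DeviceClass::AudioRender,
                           DeviceClass::PortableStorageDevice })
    {
        DeviceInformationCollection devices =
            co_await DeviceInformation::FindAllAsync(devClass);
        for (DeviceInformation const& device : devices)
            std::wcout << device.Name().c_str() << L"\n";
    }
}

int main()
{
    init_apartment();
    ListDevicesAsync().get();   // block until the enumeration finishes
}
```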

Camera compatibility

I have a USB camera with its drivers and a DLL exposing functions for using the camera in my own solutions. I want to use it in widespread applications, i.e. to be able to simply select and use it in Skype, for instance. So I want to develop something that exposes this device as a regular webcam.
I've heard of technologies such as "upper-level filter drivers" and "user-mode DirectShow source filters". It looks like they could help.
So the question is: what technologies exist for such tasks? Which should I choose to solve my problem if I have no driver-development skills?
Skype still uses DirectShow for video capture, and a user-mode filter will do the job. Still, Skype makes certain unreasonable assumptions that limit which source filters are compatible, as if the developers stopped development/testing as soon as they had their favorite USB cam working, ignoring all the other devices users might want to attach.
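For background on what "compatible" means here: DirectShow apps discover capture sources by enumerating the video input device category, and a registered user-mode source filter shows up in that same enumeration alongside real webcams. A minimal sketch of that enumeration (error handling trimmed):

```cpp
// Sketch: list DirectShow video capture sources, the same enumeration
// Skype-like apps perform. Link: strmiids.lib, ole32.lib, oleaut32.lib.
#include <dshow.h>
#include <cstdio>
#pragma comment(lib, "strmiids.lib")
#pragma comment(lib, "ole32.lib")
#pragma comment(lib, "oleaut32.lib")

int main()
{
    CoInitialize(nullptr);

    ICreateDevEnum* devEnum = nullptr;
    CoCreateInstance(CLSID_SystemDeviceEnum, nullptr, CLSCTX_INPROC_SERVER,
                     IID_ICreateDevEnum, (void**)&devEnum);

    IEnumMoniker* enumMoniker = nullptr;
    // S_FALSE means the category is empty, so compare against S_OK.
    if (devEnum->CreateClassEnumerator(CLSID_VideoInputDeviceCategory,
                                       &enumMoniker, 0) == S_OK)
    {
        IMoniker* moniker = nullptr;
        while (enumMoniker->Next(1, &moniker, nullptr) == S_OK)
        {
            IPropertyBag* props = nullptr;
            if (SUCCEEDED(moniker->BindToStorage(nullptr, nullptr,
                                                 IID_IPropertyBag,
                                                 (void**)&props)))
            {
                VARIANT name;
                VariantInit(&name);
                if (SUCCEEDED(props->Read(L"FriendlyName", &name, nullptr)))
                    wprintf(L"%s\n", name.bstrVal);
                VariantClear(&name);
                props->Release();
            }
            moniker->Release();
        }
        enumMoniker->Release();
    }
    devEnum->Release();
    CoUninitialize();
}
```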
One of the options you were suggested (in Russian - 1, 2) was to develop a kernel-mode driver so that your device is visible to apps through the standard WDM Video Capture Filter. This is possible and would work, though in my opinion it is huge overkill.
Fitting a custom source filter is not easy because Skype does not like having a debugger attached; still, driver development is a completely different story.
The Skype forum link you refer to is clearly misleading: the poster complains that a Skype update broke compatibility with video sources, while the admin's response is about audio devices and is irrelevant.

What do I need to have to be ready to write a Compact Framework application communicating with GPS?

Simply put, I've been asked to write an application for a smart device (a smartphone) that will get GPS coordinates from the device itself.
I have no smart device at all, and I am kind of lost among questions like: how can I check from code whether the device has a GPS? If it does, how do I obtain the coordinates in a "standard" way? Do I need a framework like GeoFramework?
So, could somebody list the essential things I need to have ready?
GeoFrameworks' GPS.NET is free these days and pretty comprehensive, so there's no point reinventing the wheel. It's also beginner-friendly, which helps. I strongly recommend downloading it and playing with some of the sample apps. It's a bit tricky if you don't have a physical device to play around with, but it does have GPS emulation classes that you can use.
All you need is a copy of VS2008 Pro with the Smart Device SDK installed.
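If you ever need to drop below the managed layer, Windows Mobile 5+ also exposes the native GPS Intermediate Driver API (gpsapi.h), which is one "standard" way to detect and read a GPS from code. A rough sketch; the field and flag names are as I recall them from the Windows Mobile SDK, so treat the details as assumptions to verify:

```cpp
// Sketch: query the GPS Intermediate Driver (Windows Mobile 5+ only).
// Link with gpsapi.lib from the Windows Mobile SDK.
#include <windows.h>
#include <gpsapi.h>
#pragma comment(lib, "gpsapi.lib")

int WINAPI WinMain(HINSTANCE, HINSTANCE, LPWSTR, int)
{
    // A NULL handle back is one way to tell the device has no GPS driver.
    HANDLE hGps = GPSOpenDevice(NULL, NULL, NULL, 0);
    if (hGps == NULL)
        return 1;

    GPS_POSITION pos = {0};
    pos.dwSize    = sizeof(pos);
    pos.dwVersion = GPS_VERSION_1;

    // Ask for a position no older than 3 seconds.
    if (GPSGetPosition(hGps, &pos, 3000, 0) == ERROR_SUCCESS &&
        (pos.dwValidFields & GPS_VALID_LATITUDE) &&
        (pos.dwValidFields & GPS_VALID_LONGITUDE))
    {
        // pos.dblLatitude / pos.dblLongitude hold the fix.
    }

    GPSCloseDevice(hGps);
    return 0;
}
```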

How do I get input from an XBox 360 controller?

I'm writing a program that needs to take input from an XBox 360 controller. The input will then be sent wirelessly to an RC Helicopter that I am building.
So far, I've learned that this can be done using either the XInput library from DirectX, or the Input framework in XNA.
I'm wondering if there are any other options available. The scope of my program is rather small, and having to install a large gaming library like DirectX or XNA seems excessive. Further, I'd like the program to be cross-platform and not Microsoft-specific.
Is there a simple lightweight way I can grab the controller input with something like Python?
Edit to answer some comments:
The copter will have 6 total propellers, arranged in 3 co-axial pairs. Basically, it will be very similar to this, only it will cost about $1,000 rather than $15,000. It will use an Arduino for onboard processing, and Zigbee for wireless control.
The 360 controller was selected because it is well designed. It is very ergonomic and has all of the control inputs needed. For those familiar with helicopter controls, the left joystick will control the collective, the right joystick will control the pitch and roll, and the analog triggers will control the yaw. The analog triggers are a big feature of the 360 controller; PlayStation and most other controllers do not have them.
I have a webpage for the project, but it is still pretty sparse. I do plan on documenting the whole design though, so eventually it will be interesting.
http://tricopter.googlecode.com
On a side note, would it kill Google to have a blog feature for googlecode projects?
I would like the 360 controller input program to run on both Linux and Windows if possible. Eventually, though, I'd like to hook the controller directly to an embedded microcontroller board (such as an Arduino) so that I don't have to go through a computer, but it's not a high priority at the moment.
It is not all that difficult. As the earlier poster mentioned, you can use the SDL libraries to read the status of the Xbox controller and then do whatever you'd like with it.
There is an SDL tutorial, http://sdl.beuc.net/sdl.wiki/Handling_Joysticks, which is fairly useful; a minimal polling sketch follows after the list below.
Note that an Xbox controller has the following:
two joysticks:
    left joystick is axes 0 & 1;
    left trigger is axis 2;
    right joystick is axes 3 & 4;
    right trigger is axis 5
one hat (the D-pad)
11 SDL buttons
    two of them are joystick center presses
two triggers (act as axes, see above)
The upcoming SDL v1.3 will also support force feedback (a.k.a. haptics).
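Here is the promised SDL 1.2-style polling sketch (device index 0 and the ten-second poll loop are arbitrary choices for the demo):

```cpp
// Sketch: open the first joystick (a wired X360 pad shows up as one) and
// poll the left stick, using the axis numbering listed above.
#include <SDL/SDL.h>
#include <cstdio>

int main(int argc, char* argv[])
{
    if (SDL_Init(SDL_INIT_JOYSTICK) < 0) return 1;
    if (SDL_NumJoysticks() < 1) { printf("no joystick found\n"); return 1; }

    SDL_Joystick* pad = SDL_JoystickOpen(0);
    if (!pad) return 1;

    for (int i = 0; i < 100; ++i)                 // poll for ~10 seconds
    {
        SDL_JoystickUpdate();                     // refresh joystick state
        Sint16 x = SDL_JoystickGetAxis(pad, 0);   // left stick, horizontal
        Sint16 y = SDL_JoystickGetAxis(pad, 1);   // left stick, vertical
        printf("left stick: %6d %6d\n", x, y);
        SDL_Delay(100);
    }

    SDL_JoystickClose(pad);
    SDL_Quit();
    return 0;
}
```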
I assume, since this thread is several years old, that you have already built something, so this post is primarily to inform future visitors.
PyGame can read joysticks, which is what the X360 controller shows up as on a PC.
Well, if you really don't want to add a dependency on DirectX, you can use the old Windows joystick API: see Windows Multimedia -> Joystick Reference in the Platform SDK.
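A short sketch of that API, polling once; the joystick ID and the fields printed are arbitrary:

```cpp
// Sketch of the legacy winmm joystick API (no DirectX dependency).
// Link with winmm.lib.
#include <windows.h>
#include <mmsystem.h>
#include <cstdio>
#pragma comment(lib, "winmm.lib")

int main()
{
    JOYINFOEX info = {};
    info.dwSize  = sizeof(info);
    info.dwFlags = JOY_RETURNALL;             // axes, buttons, and POV hat

    if (joyGetPosEx(JOYSTICKID1, &info) == JOYERR_NOERROR)
        printf("X=%lu Y=%lu buttons=0x%lx\n",
               info.dwXpos, info.dwYpos, info.dwButtons);
    else
        printf("no joystick on ID 1\n");
    return 0;
}
```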
The standard free cross-platform game library is Simple DirectMedia Layer, originally written to port Windows games to Unix (Linux) systems. It's a very basic, lightweight API that tends to support the minimal common subset of features on each system, and it has bindings for most major languages. It has very basic joystick and gamepad support (no force feedback, for example), but it might be sufficient for your needs.
Perhaps the Mono.Xna library has added GamePad support, which would provide the cross-platform functionality you were looking for:
http://code.google.com/p/monoxna/
As far as the concern about the library being too heavyweight: sure, for this option that may be true... however, it could open up opportunities to do some nice visualization in the future.
Disclaimer: I'm not familiar with the status of the Mono.Xna project, so it may not have added this feature yet. But still, 'tis an option :-)