I've always thought this would be cool, and now OS technology seems like it could really make it easy to implement.
Is there a known/easy way to hook up dual mice as inputs to a multi-touch enabled OS, such as Windows 7, and use one in each hand to simulate two hands (or fingers?) on the screen? This would make it easy to stretch, rotate, etc., and simulate a lot of the gestures used on touchscreens.
I think it might be a lot of fun for certain kinds of games, and for many artistic apps as well.
In Windows, you can use the DirectInput API, included in DirectX 8 and later, to read the independent input of as many mice as desired. The easiest way is to get hold of several USB mice and connect them all at once.
Also, you don't need a 3D view whatsoever to take advantage of DirectInput; you can access the API from a regular Win32 or .NET app.
For instance, the PC game Ricochet Infinity allows two mice as input for its two-player co-op mode.
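As a minimal sketch of that approach (error handling omitted; note that on some Windows versions DirectInput may expose only the merged system pointer, in which case the Raw Input API is the usual fallback), enumerating and polling each attached mouse looks roughly like this:

```cpp
// Sketch: enumerate every attached pointing device with DirectInput 8
// and poll each one independently.
#define DIRECTINPUT_VERSION 0x0800
#include <windows.h>
#include <dinput.h>
#include <cstdio>
#include <vector>
#pragma comment(lib, "dinput8.lib")
#pragma comment(lib, "dxguid.lib")

static IDirectInput8* g_di = nullptr;
static std::vector<IDirectInputDevice8*> g_mice;

// Called once per attached pointing device during enumeration.
BOOL CALLBACK OnMouse(const DIDEVICEINSTANCE* inst, void*)
{
    IDirectInputDevice8* dev = nullptr;
    if (SUCCEEDED(g_di->CreateDevice(inst->guidInstance, &dev, nullptr))) {
        dev->SetDataFormat(&c_dfDIMouse2);            // standard mouse state layout
        dev->SetCooperativeLevel(GetConsoleWindow(),
                                 DISCL_BACKGROUND | DISCL_NONEXCLUSIVE);
        dev->Acquire();
        g_mice.push_back(dev);
    }
    return DIENUM_CONTINUE;                           // keep enumerating
}

int main()
{
    DirectInput8Create(GetModuleHandle(nullptr), DIRECTINPUT_VERSION,
                       IID_IDirectInput8, (void**)&g_di, nullptr);
    g_di->EnumDevices(DI8DEVCLASS_POINTER, OnMouse, nullptr, DIEDFL_ATTACHEDONLY);
    printf("found %zu pointing devices\n", g_mice.size());

    for (;;) {                                        // poll each mouse separately
        for (size_t i = 0; i < g_mice.size(); ++i) {
            DIMOUSESTATE2 s = {};
            if (SUCCEEDED(g_mice[i]->GetDeviceState(sizeof(s), &s)) && (s.lX || s.lY))
                printf("mouse %zu moved (%ld, %ld)\n", i, s.lX, s.lY);
        }
        Sleep(10);
    }
}
```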
A platform-independent solution for multiple mouse input is http://www.icculus.org/manymouse/
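A hedged sketch using the ManyMouse C API (manymouse.h from the project above), keeping an independent cursor position per device:

```cpp
// Sketch: track one virtual cursor per physical mouse with ManyMouse.
#include <cstdio>
#include "manymouse.h"

int main()
{
    const int mice = ManyMouse_Init();        // returns the number of mice found
    if (mice <= 0) { printf("no mice found\n"); return 1; }

    for (int i = 0; i < mice; ++i)
        printf("#%d: %s\n", i, ManyMouse_DeviceName(i));

    int x[64] = {0}, y[64] = {0};             // one cursor position per device
    ManyMouseEvent ev;
    for (;;) {
        while (ManyMouse_PollEvent(&ev)) {
            if (ev.type == MANYMOUSE_EVENT_RELMOTION && ev.device < 64) {
                if (ev.item == 0) x[ev.device] += ev.value;   // item 0 = X axis
                else              y[ev.device] += ev.value;   // item 1 = Y axis
                printf("mouse %u at (%d, %d)\n", ev.device,
                       x[ev.device], y[ev.device]);
            }
        }
    }
    ManyMouse_Quit();
}
```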
Like the Windows-only solution posted above, this still won't let you do multi-touch in Chrome, the Android emulator, etc. (though both should be relatively simple to implement).
If you want to simulate multi-touch in other people's software, things are a little trickier.
Just about any non-Windows system can support multiple pointers for the main interface: http://en.wikipedia.org/wiki/Multi-Pointer_X
Chrome supports simulating touch events; I'm unsure about multi-touch.
It is possible to translate mouse input into touch-pointer input with Multi Touch Vista.
I am looking for a wireless communication technology for exchanging data between devices via sound at ultrasonic frequencies. It is possible to communicate between two mobile devices this way. I want to communicate between a mobile device and an embedded device. Is that possible? Do any devices work with such a protocol?
Of course it is possible. Back in the 1970s, my TV remote control used ultrasound to change the channel and turn the TV off. The control was somewhat rudimentary: IIRC, a short press changed the channel up and a long press turned the TV off. It worked quite reliably for these functions.
Providing more functionality would require a more complicated modulation scheme which, as has been said in another answer, would be prone to interference from other sound sources. This probably explains why infra-red signals are used in more modern remote control systems.
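To illustrate what such a modulation scheme might look like, here is a hedged sketch of a naive binary FSK encoder in the near-ultrasonic band. The carrier frequencies, bit rate, and sample rate are illustrative assumptions, not from any real protocol, and a real system would add a sync preamble and error correction:

```cpp
// Hypothetical sketch: encode bytes as binary FSK (18 kHz = 0, 19 kHz = 1)
// at a 44.1 kHz sample rate. Feed the samples to an audio output; a
// receiver would detect the two tones (e.g. with Goertzel filters).
#include <cmath>
#include <cstdint>
#include <vector>

std::vector<float> fskEncode(const std::vector<uint8_t>& data)
{
    const float f0 = 18000.0f, f1 = 19000.0f;   // assumed carrier tones
    const int   sampleRate = 44100;
    const int   samplesPerBit = 2205;           // 50 ms per bit -> 20 bit/s
    const float twoPi = 6.2831853f;

    std::vector<float> out;
    for (uint8_t byte : data) {
        for (int bit = 7; bit >= 0; --bit) {    // most significant bit first
            const float f = ((byte >> bit) & 1) ? f1 : f0;
            for (int n = 0; n < samplesPerBit; ++n)
                out.push_back(std::sin(twoPi * f * n / sampleRate));
        }
    }
    return out;
}
```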
It is possible; why shouldn't it be? Smartphones are just embedded computers too. I imagine getting CE/FCC/etc. certification for such an embedded device will not be so easy. And production testing...
But is it feasible? Probably not. Power consumption is a lot higher than with any RF link, it's more susceptible to noise (quite literally), and the required components (microphone + speaker) are bigger than RF components (an antenna).
And then there's a whole bunch of other things you need to keep in mind when working with ultrasound, starting with the plastic design of the embedded device, but also things like the effect of ultrasound on people and their pets.
I know this is not directly programming related, but is there a way to purposely limit the signal strength on a test mobile device to determine how your app performs under weak-signal conditions?
I have an app that streams video and audio to a server and need to test how it performs in low-signal areas. Any suggestions?
One realistic way to do it is to put the device in a weak Faraday cage. You can make one, or buy a bag or other pre-manufactured cage that blocks radio transmissions. As long as it's not too strong, it should weaken but not completely block the signal.
You can use software like Network Link Conditioner on OS X or NetLimiter on Windows. They have options for bandwidth limiting and even packet loss, with presets for different typical situations plus the ability to create your own. You can just create a Wi-Fi network on your machine and connect to it from the device you want to test.
Please note that iOS has Network Link Conditioner built in (you can find it under the Developer menu in Settings), while Android may have something similar on a rooted device (I've never tried it, though).
If you run your app in a simulator, many have options for emulating poor signal conditions.
There is at least one open source project whose aim is to simulate different network conditions for exactly the type of testing you are describing:
https://github.com/facebook/augmented-traffic-control
This can work on a cellular network too, but that would most likely require your own base stations, etc. That is possible via other open source projects (e.g. http://openbsc.osmocom.org/trac/), but is likely not necessary, as you can probably simulate the same effect with the Wi-Fi test setup.
I have heard Vulkan will unify initialisation across different operating systems. Does that mean Vulkan creates the window and handles mouse/keyboard events, so I can avoid OS-specific programming?
It won't. Window creation will be platform specific, and a WSI extension will let you link the window to a renderable image that you can push to the screen.
From information gleaned out of the presentations that have been given, I expect that you will use a platform-specific WSI extension to create a swapchain for your window.
Then, each time you wish to push a frame to the screen, you need to acquire a presentable image from the swapchain, render to it, and then present it.
See this slide pack from slide 109 onward.
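Based on those slides, the per-frame flow would look roughly like this (a hedged sketch; the device, swapchain, semaphores, and queue are assumed to have been created during setup):

```cpp
#include <vulkan/vulkan.h>

// One frame: acquire an image from the swapchain, render, present.
// All handles are assumed to have been created during setup.
void presentFrame(VkDevice device, VkSwapchainKHR swapchain,
                  VkQueue presentQueue,
                  VkSemaphore imageAvailable, VkSemaphore renderFinished)
{
    uint32_t imageIndex = 0;

    // Ask the swapchain for the next presentable image.
    vkAcquireNextImageKHR(device, swapchain, UINT64_MAX,
                          imageAvailable, VK_NULL_HANDLE, &imageIndex);

    // ... submit command buffers that render into swapchain image
    //     [imageIndex], waiting on imageAvailable and signalling
    //     renderFinished ...

    // Hand the image back to the presentation engine.
    VkPresentInfoKHR present = {};
    present.sType              = VK_STRUCTURE_TYPE_PRESENT_INFO_KHR;
    present.waitSemaphoreCount = 1;
    present.pWaitSemaphores    = &renderFinished;
    present.swapchainCount     = 1;
    present.pSwapchains        = &swapchain;
    present.pImageIndices      = &imageIndex;
    vkQueuePresentKHR(presentQueue, &present);
}
```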
No, Vulkan is a low-level API for accessing GPUs. It does not deal with windows and input. In fact it can be used easily in a "headless" manner with no visual output at all.
Probably not. The Vulkan API is a graphics library, much like OpenGL.
On Ubuntu Linux, for example, OpenGL is used for the desktop animation effects in Unity, and it could be replaced with Vulkan for better performance.
But I don't think Windows will change, as Microsoft has its own DirectX graphics library, and it would be odd for them to use something other than their own software.
The applications that are going to benefit most from Vulkan are games and other software that uses 2D or 3D rendering.
It's very likely that most games will move to Vulkan because it's cross-platform; they will therefore gain more users, which equals more profit.
Khronos (the developers of the Vulkan API) are also bringing out tools that will largely port your application from OpenGL or DX12 to Vulkan, requiring less development/porting work on the software developer's side.
So...
Window creation: possibly. (Although the code behind the window runs on the CPU, the library that draws the window on screen might be using Vulkan.) This differs greatly depending on which OS, distribution, and version you are working with.
Mouse/keyboard events: no, as these don't require any graphical calculations, only CPU work.
Window frames are generally desktop-manager controls; you could display the Vulkan app's content in the client area. Otherwise, Vulkan would have to provide interfaces to the desktop manager for window creation (a GUI library). Someone could simply create a device context (a DC on Windows; similar for the X server) and then manage the "Vulkan app" manually, like a windowed game with no chrome (frameless), but this would be a great deal of work right now.
An old-hand Windows developer bible addressing device contexts and rendering, among many other things: Programming Windows®, Fifth Edition (Developer Reference). A very good read, and from an agnostic point of view it provides a great deal of carry-over knowledge loosely applicable to most systems.
Suppose I have a 3D (but not stereoscopic) DirectX game or program. Is there a way for a second program (or a driver) to change the camera position in the game?
I'm trying to build a head-tracking plugin or driver that I can use for my DirectX games/programs. An inertial motion sensor will give me the position of my head but my problem is using that position data to change the camera position, not with the hardware/math concerns of head tracking.
I haven't been able to find anything on how to do this so far, but iZ3D was able to create two cameras near the original camera and use them for stereoscopic rendering, so I know there exists some hook/link/connection into DirectX that makes camera manipulation by a second program possible.
If I am able to get this to work I'll release the code.
-Shane
Hooking Direct3D calls is, by its nature, just hooking DLL calls; i.e., it's not something special to D3D but a generic technique. Try googling for "hook dll" or start from here: [C++] Direct3D hooking sample. As always happens with hooks, there are many caveats, and you'll have to write a pretty huge amount of boilerplate to satisfy all the needs of the hooked application.
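As an illustrative sketch only (not a complete solution), one common technique once your code is inside the target process (e.g. via a proxy d3d9.dll) is to patch the device's vtable so every SetTransform(D3DTS_VIEW, ...) call can be intercepted and the view matrix adjusted with your head-tracking data. Note that shader-based games often bypass SetTransform entirely:

```cpp
// Hypothetical sketch: vtable-patch IDirect3DDevice9::SetTransform (slot 44
// in the D3D9 vtable) to intercept view-matrix updates. Assumes the code is
// already injected into the target process and has a pointer to its device.
#include <windows.h>
#include <d3d9.h>

typedef HRESULT (WINAPI *SetTransform_t)(IDirect3DDevice9*,
                                         D3DTRANSFORMSTATETYPE, const D3DMATRIX*);
static SetTransform_t RealSetTransform = nullptr;

HRESULT WINAPI HookedSetTransform(IDirect3DDevice9* dev,
                                  D3DTRANSFORMSTATETYPE state, const D3DMATRIX* m)
{
    D3DMATRIX adjusted = *m;
    if (state == D3DTS_VIEW) {
        // Multiply 'adjusted' by a rotation/offset built from the
        // head-tracker pose here (the sensor code is up to you).
    }
    return RealSetTransform(dev, state, &adjusted);   // forward to the game
}

void InstallHook(IDirect3DDevice9* device)
{
    void** vtable = *reinterpret_cast<void***>(device);
    RealSetTransform = reinterpret_cast<SetTransform_t>(vtable[44]);

    DWORD old;                                   // vtables are usually read-only
    VirtualProtect(&vtable[44], sizeof(void*), PAGE_READWRITE, &old);
    vtable[44] = reinterpret_cast<void*>(&HookedSetTransform);
    VirtualProtect(&vtable[44], sizeof(void*), old, &old);
}
```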
That said, manipulating the camera in games usually gives poor results. There are at least two key features of modern PC games that will severely limit your idea:
Pre-clipping. Almost any game engine filters out objects that are behind the viewing plane. So when you rotate the camera to one side, you won't see the objects you'd expect to see in the real world; they were simply never sent to D3D, since the game doesn't know the viewing plane has changed.
Multi-pass rendering. Many popular post-processing effects are done in extra passes (either over the whole scene or just part of it). Mirrors and "screens" are the best-known such effects. Without knowing which camera you're manipulating, you'll most likely just break the scene.
Btw, #2 is the reason stereoscopic mode is not 100% compatible with all games. For example, in the Source engine, HDR scenes are rendered in three passes, and if you don't know how to distinguish them, you'll do nothing but break the game. Take a look at how nVidia implements its stereoscopic mode: they make a separate hook for every popular game, and even with this approach it's not always possible to get the expected result.
I'm planning to build an embedded system that is almost like an organizer, i.e. it handles contacts, games, applications, and Wi-Fi/2G/3G for internet. I planned to build the UI with QML because of its easy-to-use and quick application-building nature, on top of a Linux kernel.
But after reading these articles:
http://qt-project.org/forums/viewthread/5820 &
http://en.roolz.org/Blog/Entries/2010/10/29_Qt_QML_on_embedded_devices.html
I am depressed and reconsidering my idea of using QML!
My hardware will have this configuration: a processor around 600 MHz, 128 MB of RAM, and no GPU.
Please give comments on this and suggest some alternatives.
Thanks in Advance.
inblueswithu
I have created a QML application for the Nokia E63, which has a 369 MHz processor and 128 MB of RAM; I don't think it has a GPU. The application is a stopwatch. I have animated button-click events, like jumping (jumping balls). The animations are really smooth, even when two buttons jump at the same time. A 600 MHz processor can be expected to handle QML easily.
This is the link to the sis file: http://store.ovi.com/content/184985. If you have a Nokia mobile, you can test it.
Maybe you should consider building QML elements by hand instead of exporting them from Photoshop or GIMP. For example, using Item in place of Rectangle where possible is more optimal. So give it a try, perhaps by creating a rough sketch with a good amount of animation to check whether your processor can handle it. If it doesn't work as expected, then consider plain Qt to build your UI.
QML applications work fine on low-spec Nokia devices. I have made one for the 5800 XpressMusic smartphone without any problems.