I need to share a surface between a Direct3D 10.1 device and a Direct3D 11 device, so that my application can render sprites on a surface shared between Direct2D and Direct3D 10.1.
I've read this topic http://msdn.microsoft.com/en-us/library/windows/desktop/ee913554(v=vs.85).aspx
but there is only an example for sharing between D2D and D3D 10.1, not for sharing between D3D 10.1 and D3D 11. Can someone give me a code example?
Surface sharing between different DirectX devices is done via IDXGIResource::GetSharedHandle and ID3D11Device::OpenSharedResource.
Here is another answer that explains the process in more detail.
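A minimal sketch of that flow, assuming `device10` (the D3D 10.1 device) and `device11` (the D3D 11 device) already exist; error handling, device creation, and the actual rendering are omitted, so this is not a drop-in implementation:

```cpp
#include <d3d10_1.h>
#include <d3d11.h>
#include <dxgi.h>

void ShareSurface(ID3D10Device1* device10, ID3D11Device* device11)
{
    // 1. Create the texture on the D3D 10.1 device with the SHARED misc flag.
    D3D10_TEXTURE2D_DESC desc = {};
    desc.Width = 1024;
    desc.Height = 768;
    desc.MipLevels = 1;
    desc.ArraySize = 1;
    desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
    desc.SampleDesc.Count = 1;
    desc.Usage = D3D10_USAGE_DEFAULT;
    desc.BindFlags = D3D10_BIND_RENDER_TARGET | D3D10_BIND_SHADER_RESOURCE;
    desc.MiscFlags = D3D10_RESOURCE_MISC_SHARED;   // makes the surface shareable

    ID3D10Texture2D* tex10 = nullptr;
    device10->CreateTexture2D(&desc, nullptr, &tex10);

    // 2. Get the shared handle through the texture's IDXGIResource interface.
    IDXGIResource* dxgiRes = nullptr;
    tex10->QueryInterface(__uuidof(IDXGIResource),
                          reinterpret_cast<void**>(&dxgiRes));
    HANDLE sharedHandle = nullptr;
    dxgiRes->GetSharedHandle(&sharedHandle);
    dxgiRes->Release();

    // 3. Open the same surface on the D3D 11 device.
    ID3D11Texture2D* tex11 = nullptr;
    device11->OpenSharedResource(sharedHandle,
                                 __uuidof(ID3D11Texture2D),
                                 reinterpret_cast<void**>(&tex11));

    // tex10 and tex11 now refer to the same GPU surface.
}
```

Note that rendering to the same surface from two devices needs synchronization; if you create the texture with D3D10_RESOURCE_MISC_SHARED_KEYEDMUTEX instead, each device can acquire and release it through IDXGIKeyedMutex.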
Can I use a Kinect sensor for testing my algorithms related to depth measurement? Has someone already tried this?
I have researched a bit, and thus have a few questions:
Is there a Linux driver to work with the Kinect?
Which Kinect is advisable: Kinect v1 or v2?
Is there a way I can get the data onto my computer using a USB cable? As far as I have seen, the Kinect needs to be modified (i.e. a 12 V power supply must be added). Does anyone know the specifications of this power supply? How many amperes should it supply?
Lastly, why is there such a massive price difference between the USB adapter for Kinect v1 (for Xbox 360, 4 pounds) and for Kinect v2 (for Xbox One, 50 pounds), although as far as I understand both of them simply split out the power and data lines?
I'm not sure whether the Kinect sensor is appropriate for testing your algorithms given that I don't know the specifics, but to answer your other questions:
Yes, there are Linux drivers, such as OpenKinect's libfreenect for Kinect v1 and libfreenect2 for Kinect v2.
Note that I only have experience with the official Kinect SDK. If you care about skeletal tracking quality, you should probably use the official Kinect SDK on Windows. If you don't care about the skeleton tracking, that gives you a lot more options.
Kinect v2 - it has better specs. Certain requirements might call for using Kinect v1, but generally, Kinect v2 is the default choice.
No, you need the adapter/power supply to connect it to a PC. The official power supply is 2.67A at 12V. There are many tutorials online for DIY, such as this YouTube video: How to Hack Xbox One Kinect to Work on Windows 10 PC
Supply and demand. The adapters are no longer being manufactured and there is more demand for the Kinect v2 adapters.
Earlier this week I received the Intel RealSense D435 camera and now I am discovering its capabilities. After doing a few hours of research, I discovered that the previous version of the SDK had a 3D model scan example application. Since SDK 2.0, this example application is no longer present, which makes it harder to create 3D models with the camera.
I have managed to create various point cloud (.ply) files with the camera, and am now trying to use CloudCompare to generate a 3D model from them, so far without success. Since my knowledge of computer vision is rather basic, I am reaching out to the community to ask how a 3D model scan can be accomplished using only point clouds. The model can be rough, but preferably most of the noisy data should be removed.
Try RecFusion 1.7.3 for scanning (99 euro).
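If you stay with CloudCompare (or your own code), the usual first step before meshing is to filter noise. Below is a minimal sketch of statistical outlier removal, the same idea behind CloudCompare's SOR filter and PCL's StatisticalOutlierRemoval: for each point, compute the mean distance to its k nearest neighbours, then drop points whose statistic is far above the cloud-wide average. It is brute force O(n²) for clarity; real tools use a k-d tree.

```cpp
#include <algorithm>
#include <array>
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

using Point = std::array<double, 3>;

static double dist(const Point& a, const Point& b)
{
    double dx = a[0] - b[0], dy = a[1] - b[1], dz = a[2] - b[2];
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Keep only points whose mean distance to their k nearest neighbours is
// within (mean + mult * stddev) of that statistic over the whole cloud.
std::vector<Point> removeOutliers(const std::vector<Point>& cloud,
                                  std::size_t k, double mult)
{
    std::vector<double> meanDist(cloud.size());
    for (std::size_t i = 0; i < cloud.size(); ++i) {
        std::vector<double> d;
        for (std::size_t j = 0; j < cloud.size(); ++j)
            if (j != i) d.push_back(dist(cloud[i], cloud[j]));
        std::sort(d.begin(), d.end());
        double sum = 0;
        for (std::size_t n = 0; n < k && n < d.size(); ++n) sum += d[n];
        meanDist[i] = sum / k;
    }
    double mean = 0;
    for (double m : meanDist) mean += m;
    mean /= meanDist.size();
    double var = 0;
    for (double m : meanDist) var += (m - mean) * (m - mean);
    double stddev = std::sqrt(var / meanDist.size());

    std::vector<Point> kept;
    for (std::size_t i = 0; i < cloud.size(); ++i)
        if (meanDist[i] <= mean + mult * stddev) kept.push_back(cloud[i]);
    return kept;
}
```

Once the obvious flyers are gone, surface reconstruction (e.g. Poisson reconstruction in CloudCompare) has a much better chance of producing a clean mesh.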
The Kinect 2 for Windows is capable of detecting heart rates, but this is not implemented in the SDK. I've found one sample (https://k4wv2heartrate.codeplex.com/), but the author has not released the source code for his work.
Has anyone used an open source library for detecting heart rate with the Kinect 2 for Windows?
Well, since nobody has answered in 4 days, let me share what I know and see if it helps.
I haven't worked with the Kinect 2 yet, nor have I looked at its SDK capabilities. But I know that the trick to heartbeat detection lies in the user's image when viewed through a camera with infrared capabilities: with each heartbeat, blood flow causes a subtle change in skin color. Try to see if you can make the RGB camera enter that mode, and then simply detect the color changes in the user's skin.
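To illustrate the signal-processing half of that idea (not Kinect-specific): average the skin-region pixel intensity of each frame to get a one-dimensional signal, then find its dominant frequency in the plausible heart-rate band. This sketch scans candidate frequencies with a naive DFT-style correlation instead of a real FFT; the band limits and step size are arbitrary choices, not values from any SDK.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Estimate beats per minute from a per-frame mean skin-intensity signal,
// by scanning the 0.7-3.0 Hz band (42-180 bpm) and picking the frequency
// with the strongest sinusoidal correlation.
double estimateBpm(const std::vector<double>& signal, double fps)
{
    const double pi = 3.14159265358979323846;

    // Remove the mean so the DC component does not dominate.
    double mean = 0;
    for (double v : signal) mean += v;
    mean /= signal.size();

    double bestFreq = 0, bestPower = -1;
    for (double f = 0.7; f <= 3.0; f += 0.05) {
        double re = 0, im = 0;
        for (std::size_t n = 0; n < signal.size(); ++n) {
            double phase = 2 * pi * f * n / fps;
            re += (signal[n] - mean) * std::cos(phase);
            im += (signal[n] - mean) * std::sin(phase);
        }
        double power = re * re + im * im;
        if (power > bestPower) { bestPower = power; bestFreq = f; }
    }
    return bestFreq * 60.0;  // Hz -> beats per minute
}
```

On real camera data you would need several seconds of stable face tracking and some smoothing; the color fluctuation is tiny compared to lighting and motion noise.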
I am very new to Kinect programming and am tasked to understand several methods for 3D point cloud stitching using Kinect and OpenCV. While waiting for the Kinect sensor to be shipped over, I am trying to run the SDK samples on some data sets.
I am really clueless as to where to start now, so I downloaded some datasets here, and do not understand how I am supposed to view/parse these datasets. I tried running the Kinect SDK Samples (DepthBasic-D2D) in Visual Studio but the only thing that appears is a white screen with a screenshot button.
There seems to be very little documentation with regards to how all these things work, so I would appreciate if anyone can point me to the right resources on how to obtain and parse depth maps, or how to get the SDK Samples work.
The Point Cloud Library (PCL) is a good starting point for handling point cloud data obtained using the Kinect and the OpenNI driver.
OpenNI is, among other things, open-source software that provides an API to communicate with vision and audio sensor devices (such as the Kinect). Using OpenNI you can access the raw data acquired with your Kinect and use it as input for your PCL software, which can then process the data. In other words, OpenNI is an alternative to the official Kinect SDK, compatible with many more devices, and with great support and tutorials!
There are plenty of tutorials out there like this, this and these.
Also, this question is highly related.
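Whichever driver you use, the depth frames you receive are typically 16-bit images where each pixel stores a distance in millimetres. Turning a pixel into a 3D point is a pinhole back-projection; here is a sketch, where the intrinsics (fx, fy, cx, cy) used in the test are illustrative placeholders, not calibrated values for any particular sensor:

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Back-project one depth pixel (u, v) with depth in millimetres into a
// 3D point in metres, using pinhole camera intrinsics:
//   fx, fy - focal lengths in pixels; cx, cy - principal point.
std::array<double, 3> deproject(double u, double v, double depthMm,
                                double fx, double fy, double cx, double cy)
{
    double z = depthMm / 1000.0;          // mm -> m
    double x = (u - cx) * z / fx;
    double y = (v - cy) * z / fy;
    return {x, y, z};
}
```

Looping this over every valid (non-zero) depth pixel gives you exactly the kind of point cloud that PCL then operates on.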
I am trying to do some work using Kinect and the Kinect SDK.
I was wondering whether it is possible to detect facial expressions (e.g. wink, smile, etc.) using the Kinect SDK, or to get raw data that can help in recognizing these.
Can anyone kindly suggest any links for this ? Thanks.
I am also working on this and I considered two options:
Face.com API:
There is a C# client library, and there are a lot of examples in their documentation.
EmguCV
This guy talks about basic face detection using EmguCV and the Kinect SDK, and you can use this to recognize faces.
I have presently stopped developing this, but if you complete it, please post a link to your code.
This is currently not featured within the Kinect for Windows SDK due to the limitations of the Kinect in producing high-resolution images. That being said, libraries such as OpenCV and AForge.NET have been successfully used for finger and face detection, both on the raw images returned from the Kinect and on RGB video streams from webcams. I would use these computer vision libraries as a starting point.
Just a note: MS is releasing the "Kinect for PC" along with a new SDK version in February. This has a new "Near Mode" which will offer better resolution for close-up images. Face and finger recognition might be possible with this. You can read an MS press release here, for example:
T3.com
The new Kinect SDK 1.5 has been released and contains facial detection and recognition.
You can download the latest SDK here,
and check this website for more details about Kinect face tracking.